Headlines You Should Know
World Economic Forum Cites AI-Generated Misinformation as Top Risk
The World Economic Forum (WEF) issued a report based on a survey of more than 1,400 global risk experts, policymakers, and industry leaders, who identified AI-generated misinformation as the biggest risk the world faces over the next two years.
It’s clear why this is a big problem now: more than 50 countries will hold elections this year, including a presidential election here in the U.S., and news organizations are, at best, unclear about how they use AI and how much oversight they have over AI-generated content. NewsGuard has identified more than 600 “unreliable AI-generated news” websites in 15 languages, and counting.
The sprawl of AI content has created new dilemmas for the C-suite: whether and how to disclose their company’s use of AI, how to prevent harmful or malicious use of the technology, and how to protect their reputation if misinformation targets their business.
For its part, OpenAI published a blog post Monday about how it is approaching this year’s election cycle and plans to prevent abuse. Other companies are using AI to fight misinformation, whether it’s AI-generated or otherwise. Of course, over the next two years and beyond, people will need to sharpen their ability to spot AI misinformation, and AI-generated content in general.
Elsewhere …
- Anthropic Researchers Find AI Models Can Be Trained to Deceive
- IMF Warns Massive AI Growth Will Hit Jobs, as CEOs Agree Things Must Change
- AI-Generated George Carlin Comedy Special Slammed by Comedian’s Daughter
- PODCAST: AI on Trial – The New York Times vs. OpenAI and Microsoft
Tips and Tricks
Can you spot AI-generated content?
What’s happening: As the risk of AI misinformation grows, the onus falls on humans to recognize what AI content looks like. There’s a market for AI detectors, but experts say they don’t work reliably, particularly in education, where students have refined their prompts to make content seem more genuine. There’s plenty of bad AI content, though, and it’s only continuing to flood the internet.
A Deloitte report finds that only 22% of business leaders are “highly or very highly prepared” for generative AI adoption, so it’s unlikely most have a trained eye for spotting AI content. Here are a couple of high-level pointers.
AI writing tells: If you see a piece of content that starts with, “In the ever-evolving landscape of…” congratulations, you’ve probably found something generated by AI. This phrase is very common in the AI chatbot lexicon. So are verbose language, repetitive phrases, uniform sentence structure, and, strangely, metaphors about beacons, orchestras, and whirlwinds.
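The phrase-spotting habit above can be automated in a crude way. Here’s a minimal sketch of a keyword-based checker; the phrase list is illustrative, drawn from the tells mentioned here, and this is a rough heuristic, not a substitute for the AI detectors (reliable or otherwise) discussed above.

```python
# Illustrative, non-exhaustive list of phrases often associated with
# AI-generated prose, based on the tells described in this section.
AI_TELL_PHRASES = [
    "in the ever-evolving landscape of",
    "beacon of",
    "tapestry of",
    "whirlwind of",
    "it is important to note that",
]

def find_ai_tells(text: str) -> list[str]:
    """Return the tell phrases found in `text` (case-insensitive).

    A simple substring scan: it flags suspicious stock phrasing but
    cannot actually determine whether text is AI-generated.
    """
    lowered = text.lower()
    return [phrase for phrase in AI_TELL_PHRASES if phrase in lowered]

sample = ("In the ever-evolving landscape of technology, "
          "AI stands as a beacon of progress.")
print(find_ai_tells(sample))
# → ['in the ever-evolving landscape of', 'beacon of']
```

A real detector would also weigh the other tells (repetitive phrasing, uniform sentence structure), which need statistical measures rather than a fixed phrase list.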
AI image/video tells: Spotting AI-generated images and video is getting harder. Remember when the internet was duped by that photo of the pope in a puffer jacket? There are usually clues in the details. The Better Business Bureau recommends zooming in to find inconsistencies and asymmetries. These usually show up in lighting, textures, and, in the case of video, flickers or quick jump cuts.
Quote of the Week
“I am uncomfortable growing Tesla to be a leader in AI & robotics without having ~25% voting control. Enough to be influential, but not so much that I can’t be overturned.
Unless that is the case, I would prefer to build products outside of Tesla.”
— Elon Musk on X, responding to theories about his motivation for advancing AI at Tesla