Headlines You Should Know

McKinsey Survey: AI Adoption is Widespread, But Risks Not Addressed
Whether they dove in head-first or only dipped a toe in the water, McKinsey research released this morning shows 79% of respondents have at least tried generative AI. Marketing and sales are the organizational functions that have adopted AI the most, but is it being used responsibly? The survey shows that while inaccuracy is the top organizational concern (cited by 56% of respondents), only 32% are working to mitigate that risk. More on how to do that below.

Researchers ‘Jailbreak’ AI Chatbots to Evade Safeguards
Many of the generative AI platforms people use daily are less than a year old. Their creators are still evolving the models and platforms — and researchers at Carnegie Mellon University and the Center for AI Safety just showed why those efforts are essential: they automatically generated adversarial prompt suffixes that coax chatbots into producing content their safeguards are meant to block.

WATCH: A Brief History of LLMs and How AI Works
University of Pennsylvania’s Wharton Interactive Faculty Director, Ethan Mollick, and Director of Pedagogy, Lilach Mollick, launched a video series yesterday focused on practical AI. Videos aimed at educators and students will go live every day this week, but the lessons go beyond the classroom. The first video is a digestible look at a complicated topic: What’s under the hood of all these AI platforms?

Elsewhere …
- Photoshop’s New Generative AI Feature Lets You ‘Uncrop’ Images
- Yes, Showrunner AI Is Terrible – But Could Be the Future of TV
- Listen to Snoop Dogg and Gwyneth Paltrow Read You the News
- Meta Prepares AI-Powered Chatbots in Attempt to Retain Users

Tips and Tricks

What to Do with AI Outputs
What’s new: We’ve known “hallucinations” are a problem, but McKinsey’s survey highlights the disconnect between the risk and what anyone is doing about it. Left to its own devices, AI will produce plenty of inaccurate content. But you can improve accuracy both before and after submitting a prompt.
Why it matters: There may not be a better example of the need to double-check AI’s work than the two New York lawyers who cited nonexistent cases that ChatGPT made up. The lawyers were fined $5,000, but it’s easy to see how much worse it could be. Imagine a doctor’s office citing a fabricated disease, or astronomers using AI to miscalculate how far away an asteroid is (after all, GPT-4 reportedly identified prime numbers correctly 98% of the time in March but only 2% of the time in June).
Try this: Before you even see an output, you can minimize hallucinations by giving the AI model as much information as possible up front. If you point it to the data source you want it to draw from, you’re more likely to get accurate results. And regardless of what the model spits out, double-check any assertions before you rely on them.
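If you’re working through an API rather than a chat window, the same idea can be scripted. Below is a minimal sketch using the OpenAI Python SDK; the model name, file path, and question are placeholders, and the prompt wording is just one way to keep the model anchored to your source:

```python
# Minimal sketch: ground the model in your own source text to cut down on
# hallucinations. Assumes the OpenAI Python SDK (pip install openai) and an
# OPENAI_API_KEY environment variable; "report.txt" is a placeholder file.
from openai import OpenAI

client = OpenAI()

with open("report.txt") as f:
    source = f.read()

response = client.chat.completions.create(
    model="gpt-4",  # placeholder; use whatever model you have access to
    messages=[
        {
            "role": "system",
            "content": (
                "Answer using ONLY the source text below. If the answer "
                "is not in the source, say you don't know.\n\nSOURCE:\n"
                + source
            ),
        },
        {"role": "user", "content": "Summarize the three key findings."},
    ],
    temperature=0,  # a low temperature tends to reduce fabricated details
)

print(response.choices[0].message.content)
```

Even with the source pinned to the prompt, the final accuracy check is still on you.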

How to Define (and Replicate) a Voice
What’s new: The early days were about “prompt engineering,” using just the right language to generate a passable output. With more ways to use ChatGPT and other platforms, we can generate more nuanced outputs based on a person’s (or company’s) voice.
Why it matters: Replicating tone and style is essential to building and reinforcing a brand, and generative AI can seamlessly weave both into written copy. Plugins like LinkReader and WebPilot allow ChatGPT to ingest examples and borrow stylistic elements from them. Anthropic’s Claude accepts document uploads and can perform the same task.
Try this: Let’s say you’re writing a partnership press release and you’ve already written three this year. Point the AI platform to those previous examples and ask it to replicate their style and tone in a new release (and provide the new partnership details, of course), and AI can do most of the work in seconds.
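Here’s what that might look like in code. This is a minimal sketch using the OpenAI Python SDK; the file names, model name, and partnership details are all placeholder assumptions:

```python
# Minimal sketch: few-shot style replication. Feed the model prior press
# releases as voice examples, then ask for a new one in the same register.
# Assumes the OpenAI Python SDK and an OPENAI_API_KEY environment variable;
# file names, model name, and partnership details are placeholders.
from openai import OpenAI

client = OpenAI()

# Load the three earlier releases that define the voice.
examples = []
for path in ["release_q1.txt", "release_q2.txt", "release_q3.txt"]:
    with open(path) as f:
        examples.append(f.read())

prompt = (
    "Here are three press releases that define our brand voice:\n\n"
    + "\n\n---\n\n".join(examples)
    + "\n\nWrite a new press release in the same style and tone. Details: "
    "Acme Corp is partnering with Example Inc to launch a joint analytics "
    "product in Q4."  # placeholder partnership details
)

response = client.chat.completions.create(
    model="gpt-4",  # placeholder
    messages=[{"role": "user", "content": prompt}],
)

print(response.choices[0].message.content)
```

As with any draft, treat the output as a starting point and fact-check names, dates, and quotes before publishing.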