Each week we’ll gather headlines and tips to keep you current with how generative AI affects PR and the world at large. If you have ideas on how to improve the newsletter, let us know!
What You Should Know
What Will GPT-5 Bring?
In the AI world, the biggest blockbuster of the season won’t be Superman or an eighth Jurassic Park movie. It will be GPT-5, the much-awaited model from OpenAI, which CEO Sam Altman said on the company’s podcast will launch “probably sometime this summer.”
What makes GPT‑5 especially relevant for communicators is its reported ability to unify tasks that previously required hopping between models. Today, you might use one tool to extract insights from an earnings call and another to write a polished blog post or press release. GPT‑5 should be able to handle both without toggling or tradeoffs. The model is expected to work across text, image, audio, and even video, while offering stronger reasoning and longer memory. That means fewer barriers between idea and execution, whether you’re summarizing stakeholder feedback or generating pitch decks.
Communications pros are often the heaviest AI users in their orgs and the go-to resource when others need help getting up to speed. GPT‑5, with more power and less friction, could make that advocacy easier. If the model lives up to the hype, your colleagues may find it more intuitive, and it may surface new, unexpected applications for AI in communications.
Elsewhere …
- The New PR Playbook for an AI-Driven Media Landscape
- California Set to Become First US State to Manage Power Outages with AI
- Anthropic Launches its First Big Disruption to the Finance Industry
- How the Low-Vision Community Embraced AI Smart Glasses
- Humans Are Starting to Talk More Like ChatGPT, Study Claims
- Anthropic, Google, OpenAI, xAI Granted up to $200 Million for AI Work from Defense Department
- NotebookLM Adds Featured Notebooks from The Economist, The Atlantic, Others
Did you really check out that link?
What’s happening: Many experts believe AI will never stop hallucinating entirely, but you can reduce those instances by offering as much context as possible. A common way to do that is to include links in your prompts to chatbots like ChatGPT and Claude that can browse the internet. However, sometimes they don’t actually visit those links and instead invent what the pages say — a whole new, annoying generation of hallucinations.
How to spot it: Don’t assume an AI tool actually visited a link. In most instances, it will tell you that it can’t access a link because of a blocker or a robots.txt file the website uses to keep AI crawlers out. Occasionally, though, it will try to extrapolate what a web page contains based on its URL alone.
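If you’re curious whether a site’s robots.txt would block an AI crawler in the first place, Python’s standard library can parse the file. A minimal sketch — the robots.txt content and site URL below are hypothetical examples, though GPTBot is the real name of OpenAI’s crawler:

```python
from urllib.robotparser import RobotFileParser

# Hypothetical robots.txt content; real sites serve this at /robots.txt.
robots_txt = """
User-agent: GPTBot
Disallow: /

User-agent: *
Allow: /
"""

parser = RobotFileParser()
parser.parse(robots_txt.splitlines())

# OpenAI's crawler is blocked from every path on this example site...
print(parser.can_fetch("GPTBot", "https://example.com/news/story"))
# ...while other user agents are allowed.
print(parser.can_fetch("Mozilla/5.0", "https://example.com/news/story"))
```

A block like this is one reason — though not the only one — a chatbot may tell you it can’t access a page.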
Keep a critical eye on outputs that reference web pages, and look for signs that the tool actually visited the page. ChatGPT will include clickable link buttons when it references web pages and will say “Searching the web” as it thinks about its response. Claude will say “fetching” while it’s doing its search and “fetched” when complete.
Try this: When you offer a link in your prompt, don’t ask about the material right away. Start with something much simpler — “Can you see this page?” — and nothing else. That quick check confirms the tool is actually drawing information from the page before a hallucination can be woven into a broader response built on the rest of your context.
Quote of the Week
“Unless and until agents really do work at expert level, the benefits of AI use are going to be contingent on the skills of the AI user, the jagged abilities of the AI you use, the process into which you integrate use, the experience you have with the AI system & the task itself.”
— Ethan Mollick, author and Professor of Management at Wharton, in a post on X
How Was This Newsletter?