Headlines You Should Know
Where’s the Ethics Line with Generative AI?
In a little over a year since ChatGPT burst onto the scene, AI has been used to write a lot of copy. Communications pros have started leveraging it for work, students have gotten an extra hand to draft papers and reports, and in Brazil, ChatGPT was used to draft legislation in no time (and it passed unanimously). But what about AI beyond the written word?
It can be hard to spot AI voices or videos. There was backlash two years ago when AI was used to simulate Anthony Bourdain’s voice in a documentary about the late chef and adventurer that also included real voiceovers from Bourdain. Hollywood actors won safeguards around AI use in the recent SAG-AFTRA contract so they don’t lose potential earnings. In other cases, it’s easier to detect fiction — “It’s a Wonderful Life” star Jimmy Stewart is set to narrate “It’s a Wonderful Sleep Story” for the app Calm despite having died more than 25 years ago. Calm received consent from Stewart’s family, and clearly the real Jimmy Stewart isn’t losing work due to AI.
These new AI uses force us to revisit the question of ethics we first pondered when ChatGPT started producing content. As AI tools get better at generating audio and video, and matching the voice and appearance of real people, not every brand will proceed as carefully as Calm. We’re going to need policies to protect actors, celebrities, employees, and others.
Should a brand generate AI versions of public figures without permission? If you host a podcast for your employer, should the company be allowed to keep using your voice even after you switch jobs? Start thinking about what policies your company might want to put in place, and whether and how you’ll disclose your use of AI.
Elsewhere …
- Google Delays Biggest AI Launch of the Year, but It’s Still Coming Soon
- AI Takes Center Stage at COP28 as Experts Debate Tech’s Climate Impact
- LISTEN: AI’s Disruptive Magic in Journalism
- Survey Shows Power of AI in Retail
- ChatGPT Gives Longer Responses if You ‘Tip it $200,’ According to User
Tips and Tricks
Perks of the outline-to-draft method
What’s happening: Content calendars are getting clogged and time is running out in the race to the end of the year. AI can be a helpful tool to create blog posts or op-ed articles faster, but if you don’t get the result you’re looking for, you’ve only wasted more precious time.
Getting the right response: Asking an AI chatbot like ChatGPT or Claude to develop an outline before it writes anything is an efficient workflow for producing quality content quickly.
It may sound like an extra step, but seeing the plan of attack before AI produces a whole draft can save time. Additionally, working through content piece by piece (after you’ve approved the outline) gives you more control along the way so you can ultimately get the quality you need.
Try it out: Start by giving AI all the context you can — instructions on the task, examples of similar content, and material to draw from like an interview transcript or stream-of-consciousness commentary.
Then, ask it to use all those resources to develop an outline for the piece. Tinker with the order of the outline or add or remove elements until you have something you’re happy with. From there, ask the chatbot to help you flesh out the piece section by section.
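If you work with a chatbot programmatically rather than in a web interface, the same outline-first, section-by-section workflow can be expressed as a running conversation. Below is a minimal sketch assuming the common "role"/"content" chat-message format used by most chatbot APIs; the function names and prompt wording are illustrative, not from any specific library.

```python
# Sketch of the outline-to-draft workflow as a chat conversation.
# Function names here are hypothetical; the message format follows
# the widely used {"role": ..., "content": ...} chat convention.

def start_conversation(task, examples, source_material):
    """Bundle task instructions, sample content, and raw material into
    the opening prompt, and ask for an outline before any drafting."""
    context = (
        f"Task: {task}\n\n"
        f"Examples of similar content:\n{examples}\n\n"
        f"Source material:\n{source_material}\n\n"
        "Before drafting anything, propose an outline for the piece."
    )
    return [{"role": "user", "content": context}]

def request_section(conversation, outline_reply, section):
    """After you've reviewed (and possibly edited) the model's outline,
    ask for one section at a time so each part can be approved
    before moving on."""
    conversation.append({"role": "assistant", "content": outline_reply})
    conversation.append({
        "role": "user",
        "content": f"The outline looks good. Draft only the '{section}' section.",
    })
    return conversation

# Example usage: build the opening prompt, then request the first section.
msgs = start_conversation(
    task="Write a 600-word blog post on year-end content planning",
    examples="(paste one or two past blog posts here)",
    source_material="(paste interview transcript or notes here)",
)
msgs = request_section(
    msgs,
    "1. Intro  2. Why outlines help  3. How to try it",
    "Intro",
)
```

The design choice mirrors the tip above: keeping the whole exchange in one conversation means each drafted section is informed by the approved outline and everything written so far, while you stay in control at every step.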
Quote of the Week
“Throughout this whole thing, we did not lose a single employee, a single customer. Not only did they keep the products up even in the face of very difficult-to-manage growth, they also shipped new features. Research progress continued.”
— OpenAI Co-Founder and CEO (again) Sam Altman in an exclusive interview with The Verge