Each week we’ll gather headlines and tips to keep you current with how generative AI affects PR and the world at large. If you have ideas on how to improve the newsletter, let us know!

What You Should Know


How Tech Companies Aim to Solve the AI Power Problem

The more artificial intelligence proliferates, the more power it consumes. Goldman Sachs projects that AI will drive a 160% increase in data center power demand by 2030.

But we don’t have to be resigned to a fate of increased carbon emissions. Tech companies are making investments that allow AI to be a sustainable technology.

Sustainable Metal Cloud, a partner of chipmaker Nvidia, says it can help reverse the power-consumption trend with immersion cooling technology that halves energy use and costs 28% less than traditional cooling methods. Sage Geosystems is working with Meta to develop geothermal energy to help power data centers, and OpenAI is negotiating with Helion, a fusion startup backed by OpenAI CEO Sam Altman, to supply clean fusion energy for data centers.

AI itself can even help solve the problem of its own outsized power consumption. According to SAP, AI-powered sensor networks can optimize power output in smart grids and help schedule, plan, execute, and monitor changes in energy demand so providers can manage the load.

By highlighting these efforts, we can help shift public perception and encourage further innovation. As communicators, we have the power to shape this narrative, emphasizing the importance of responsible innovation and the potential for technology to address its own challenges.

Elsewhere …

Tips and Tricks

📑 Review your AI policy

What’s happening: You do have an AI policy, right? If not, you should. According to Muck Rack, half of PR firms (50%) don’t, but those that do tend to stick to them: 58% strictly follow theirs.

Why it matters: Effective AI policies help mitigate risks associated with AI deployment like privacy concerns and transparency issues. Miscommunication or poorly written policies can lead to misunderstandings or non-compliance, which can damage an organization’s reputation.

AI policies can also act as a means of quality assurance. For instance, every AI policy should include the guideline that any AI output should be reviewed by a human before it is published or shared with a client. Using AI responsibly allows your organization to capture the most value from it.

What to update: New AI tools and uses are always emerging. Six months ago, we weren’t yet sharing links to our AI prompt strings. One potential new policy would be to limit sharing those prompt links to people inside your organization, unless you’re sharing prompts about a client with that client.

As new tools come onto the scene, make sure to check their data privacy policies. If your prompts are automatically used to train the next AI model, you’ll want to be very careful about what information you put in them. Your policy should also include language to share with external parties like clients, disclosing whether and how you use AI in your work. This transparency builds trust with your partners and makes you a better steward of the technology than a firm that passes AI work off as human work.

Quote of the Week

“In the past we’ve prioritized developing unique insights from data to spark sports debates — using data like scores, stats, and serve speeds to create predictions such as ‘the likelihood to win.’ Now, we’re focused on delivering personalized US Open digital content with the help of generative AI technologies — for example, different fan personas and different styles of content like short and long form.”

— Kristi Kolski, Marketing Program Director at IBM Sports and Entertainment Partnerships, to Digiday on using AI at the US Open

How Was This Newsletter?

😀 😐 🙁