Headlines You Should Know


Why AI Tools Need Your Skills

News publications, corporations, the public, and even lawyers are increasingly turning to AI to produce content. When any of those entities doesn't check for quality, unchecked AI output can be embarrassingly incorrect.

For instance, yesterday, a judge in British Columbia reprimanded a lawyer who cited two cases fabricated by ChatGPT. “As this case has unfortunately made clear,” the judge wrote in his final comment, “generative AI is still no substitute for the professional expertise that the justice system requires of lawyers.”

That’s also true of writers, artists, and graphic designers. AI tools are producing more complex and realistic images — the latest model from Stable Diffusion has garnered some positive press in the past week — but there’s still a need for human judgment, expertise, and common sense because many of the tools are flawed.

Google recently released a new Gemini model, which can be used to generate images, but there were so many problems with it that Google temporarily shut it down. And DALL-E routinely spells things wrong in images (in the image above, DALL-E wouldn't spell "hallucinations" right no matter how many times I prompted, so I edited the speech bubble in Canva).

It comes down to this: AI tools can offer tremendous value if you use them correctly and don’t expect them to do everything for you. They can help non-writers write better, and non-artistic folks produce visual elements quickly. AI can fill a skill gap but won’t replace content creators. At least not yet.

Elsewhere …

Tips and Tricks

🙏 Please be kind to the AI mastermind

What’s happening: A new study suggests a correlation between politeness levels in large language model (LLM) prompts and the quality of AI outputs. 

Why it’s important: Everyone wants a leg up when using AI tools, and it’s not that difficult to be nice to your chatbot of choice (OK, sometimes it seems pretty difficult). But here’s the key sentence in the study: “We observed that impolite prompts often result in poor performance, but overly polite language does not guarantee better outcomes.”

Try this: The study suggests that polite environments make people feel relaxed and that LLMs respond the same way. While the study doesn't offer a blueprint for just how polite to be, a good rule of thumb is to treat the model like a human assistant: throw in a "please" and explain why you're asking for something. That approach is more likely to produce a positive result than being demanding and offering no context about how a particular prompt fits into the goal of your larger chat string.
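As a rough illustration of the pattern above (the helper and the sample prompts below are hypothetical, not taken from the study), a politely framed prompt pairs a courteous request with the context behind it:

```python
# Hypothetical sketch: framing the same request politely vs. demandingly.
# Neither string comes from the study; they just illustrate the pattern.

def frame_prompt(task: str, reason: str) -> str:
    """Wrap a task in a polite request plus the context behind it."""
    return (
        f"Please {task}. "
        f"I'm asking because {reason}, so that context may help you "
        f"tailor the response."
    )

# A bare, demanding version of the request:
demanding = "Summarize this earnings report."

# The same request, politely framed with its purpose:
polite = frame_prompt(
    "summarize this earnings report",
    "I need a two-paragraph brief for a client meeting tomorrow",
)

print(polite)
```

The point isn't the wrapper function itself, which is trivial; it's that the polite version carries both courtesy and intent, which is exactly the extra signal the study suggests helps, without tipping into the "overly polite" territory the researchers found unhelpful.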

Quote of the Week

“Rather than becoming mere caretakers or servants of machines, human workers need to develop new skills that can leverage, complement, and lead AI, achieving the enhanced outcomes.”

— Na Fu, Professor at Trinity Business School in Ireland, to BBC on the future of work