What You Should Know


What’s Happening With AI Search?

OpenAI is adding a new blade to its Swiss Army knife, ChatGPT. On Thursday, the company announced it was testing SearchGPT, “a temporary prototype of new AI search features that give you fast and timely answers with clear and relevant sources.” Access to the prototype is limited to 10,000 users (here’s a link to join the waitlist).

SearchGPT is designed to answer questions directly instead of listing links like Google and other traditional search engines do. In OpenAI’s screenshots and demo videos, the prototype looks quite like Perplexity.ai, an AI research tool that has gained steam among users and criticism from publishers. 

In April, Perplexity said it was serving 169 million queries per month, traffic that piques the interest of news publishers. Condé Nast sent a cease-and-desist last week, demanding Perplexity stop scraping its website (no word on whether a cease-and-desist was also sent to Google, which does the same thing). There was also blowback from Forbes, which alleged Perplexity plagiarized its reporting in an AI-generated news story. Today, Perplexity announced a revenue-sharing deal with publishers to tame those conflicts as it races with OpenAI to change how we search for information online. 

Brands looking to ensure their content is included in these AI tools need to evolve their search optimization strategies to include so-called generative engine optimization (GEO). This means focusing on high-quality, well-structured content that AI can easily parse and use to provide accurate, authoritative responses. It also raises the importance of a strong online presence for any brand, including media, content, social, and more. Building relationships with AI platforms and understanding their algorithms will be crucial for brands aiming to maintain visibility and relevance as search tools are reimagined.

Elsewhere …

Tips and Tricks

🫡 Just say ‘Got it.’

What’s happening: Most AI companies tout growing context windows as tangible improvements in each new large language model they release. Those context windows are measured in tokens (OpenAI says a token is roughly equal to four characters), and most of the top models can handle over 100,000 tokens. The longer your content, the more source material you feed into the AI tool, and the lengthier the AI tool’s response, the more tokens you spend. 

Why it matters: If you’re working on an extensive piece of content, or you’re feeding the tool a large document, you might start flirting with that context limit. Tools like Claude may also limit usage even before then if the website is getting a lot of traffic and using a lot of bandwidth. You can stretch your context window by limiting the AI tool’s response (where it makes sense).
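Since OpenAI pegs a token at roughly four characters, you can do a quick back-of-the-envelope check before pasting a big document into a tool. This is a rough heuristic, not a real tokenizer, and the helper names and budget numbers here are my own illustration:

```python
def estimate_tokens(text: str) -> int:
    """Rough token count using OpenAI's ~4-characters-per-token rule of thumb."""
    return max(1, len(text) // 4)

def fits_in_window(text: str, window: int = 100_000, reply_budget: int = 4_000) -> bool:
    """Check whether a document, plus room for the model's reply, fits the window."""
    return estimate_tokens(text) + reply_budget <= window

# Example: a ~250,000-character transcript
transcript = "word " * 50_000
print(estimate_tokens(transcript))   # → 62500
print(fits_in_window(transcript))    # → True (62,500 + 4,000 ≤ 100,000)
```

Real tokenizers vary by model, so treat this as a sanity check, not a guarantee.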

In most cases, if you feed an asset into an AI tool it will summarize that document, webpage, or file. This isn’t always useful — if you’re giving it a transcript of a Zoom call you attended, for instance, you probably know what was said.

Try this: Sometimes those summaries can be really lengthy and don’t add to the content you’re trying to produce. If you don’t want to see that summary, include “just say ‘Got it.’” at the end of your prompt.

Here’s my go-to prompt in those situations: “Read and analyze the attached document, and when you’re done, just say ‘Got it.’”
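If you’re scripting prompts rather than typing them, the same trick is just a suffix. A minimal sketch — the `suppress_summary` helper is my own illustration, not an OpenAI-documented pattern:

```python
def suppress_summary(prompt: str) -> str:
    """Append the 'Got it' instruction so the model skips its unprompted summary."""
    return prompt.rstrip() + " When you're done, just say 'Got it.'"

# Usage: wrap any document-analysis prompt before sending it to the model.
print(suppress_summary("Read and analyze the attached document."))
```

The reply you get back should then be two words instead of a page, leaving the token budget for the questions you actually want answered.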

Quote of the Week

“AI search is going to become one of the key ways that people navigate the internet, and it’s crucial, in these early days, that the technology is built in a way that values, respects, and protects journalism and publishers. We look forward to partnering with OpenAI in the process, and creating a new way for readers to discover The Atlantic.”

— Nicholas Thompson, CEO of The Atlantic, in OpenAI’s announcement of SearchGPT