Why I'm Optimistic About AI And Media In 2025


Happy New Year to everyone. And I'm convinced it will be happy, or at least on the road to better days for the media, with AI in the picture. That might seem counterintuitive, but with the first newsletter of the year I'm determined to start things on the right foot with a vision of how AI can alter content incentives for the better.


Knowledge Management: The Key to Thriving in the AI Era

The shift from search to AI summarization will be the biggest change in how people consume content since the rise of social media. While many see this as an existential threat to journalism, my experience working with newsrooms to adapt to AI has led me to a counterintuitive conclusion: AI could actually help restore the incentives of good journalism.

However, to realize that potential, media companies need to rethink how they manage their content — not just for their own AI initiatives, but for an ecosystem where AI slowly becomes a significant, if not the primary, interface between publishers and readers.

Advising media companies and content creators on how to turn those thoughts into action is part of what I do through The Media Copilot. Long story short, much of it involves helping clients adapt strategy to incorporate the new practice of "knowledge management" — essentially priming content for AI.

What is involved, and why would you want to do that in the first place? Read on.
 

The Shift in Incentives

In a recent interview with the folks at The Gateway podcast at Northern Illinois University, I talked about the new incentives of media in an AI world. I get to the good part about two-thirds of the way through (41:25), where I point to the potential of AI summarization engines — think Perplexity, ChatGPT Search, and even publication-specific chatbots like Time AI — to change user behavior. As more and more people shift from searching for links to getting AI summaries, the incentives of AI become the incentives of media. And if AI prioritizes comprehensive, fair, and original information — which it seems to — that aligns with good journalism.

As I mention in the podcast, the big caveat to all this is that the AI needs to be designed with the goals of journalism in mind — that it provides reliable information in the interest of creating a better-informed public. That's easier said than done, of course. Even if your goals are in the right place, how large language models (LLMs) interact with content is a tricky business.
 

Knowledge Is Power

The concept of tailoring a corpus of content and tuning the AI that mines it is broadly called knowledge management, and it's key to getting the AI-journalism interaction right. Many publications have large archives, covering many topics that evolve over time. That inevitably leads to outdated or contradictory information, which can easily confuse AI and result in incorrect or unhelpful answers to reader queries.
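To make that concrete, here is a minimal sketch of one piece of that housekeeping. It's written in Python with made-up field names ("topic," "published") rather than any real CMS schema: group archive articles by topic, then flag older pieces that a much newer one has superseded, so stale material can be reviewed or excluded before an AI ever draws on it.

```python
from datetime import datetime, timedelta

# Hypothetical archive records; the field names are illustrative, not any CMS's schema.
ARCHIVE = [
    {"id": 101, "topic": "iphone-review", "published": datetime(2021, 9, 20), "title": "iPhone 13 review"},
    {"id": 205, "topic": "iphone-review", "published": datetime(2024, 9, 18), "title": "iPhone 16 review"},
    {"id": 310, "topic": "ev-tax-credit", "published": datetime(2023, 1, 5), "title": "How the EV tax credit works"},
]

def flag_superseded(articles, stale_after=timedelta(days=365)):
    """Mark older articles as superseded when a much newer piece covers the same topic."""
    newest = {}
    for a in articles:
        if a["topic"] not in newest or a["published"] > newest[a["topic"]]["published"]:
            newest[a["topic"]] = a
    for a in articles:
        gap = newest[a["topic"]]["published"] - a["published"]
        a["superseded"] = a is not newest[a["topic"]] and gap > stale_after
    return articles

# Only un-superseded articles go into the corpus the AI is allowed to draw on.
corpus = [a for a in flag_superseded(ARCHIVE) if not a["superseded"]]
print([a["title"] for a in corpus])
```

Whether a year-old piece is actually stale is an editorial judgment, of course. The point is that the decision gets made by people before the content reaches the model, not by the model on the fly.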

While that's obviously not a good outcome, simply opting out of AI isn't really an option either. Knowledge management is something media companies need to prioritize and get right, regardless of their position on AI and chatbots. AI is simply an inevitable part of the media ecosystem. Even if a particular publication prevents AI bots from crawling its content, individuals will still apply AI-lens tools (such as Google NotebookLM and Apple Intelligence) to that content on their own. Like it or not, AI summarization is here to stay.

That's why it's important for every publication to think about the interface layer between AI and their content. Some of the key things to consider in knowledge management:

1. Balancing safety:

In media applications, AI safety is less about thumb-on-the-scale missteps like Google Gemini's infamous ahistorical photos (Black George Washington et al.) and more about guarding against both misinformation seeping into the experience and deliberate misuse. To do that, you need to create clear rules about what the AI will and won't answer. In the case of the Wall Street Journal's Joannabot, editors ensured the bot wouldn't answer queries beyond its clear area of expertise — iPhone reviews.

2. Hallucinations:

Hallucinations appear to be inherent to large language models. Using AI with a set of articles is what's called a language task — where you "aim" the AI at a corpus as opposed to asking it to draw on its built-in knowledge base — and such tasks tend to produce fewer hallucinations, but not zero. Even when the corpus is very small, such as the notifications in Apple Intelligence summaries, things can get weird.

There are ways to minimize hallucinations. AI systems can apply a layer of fact-checking to improve accuracy, and as reasoning models like OpenAI's o1 and o3 become more accessible, that kind of verification will likely become a default part of the technology. In any case, carefully choosing what goes into the corpus and weighting the content appropriately (more recent content is almost always prioritized, for example) is extremely important; a rough sketch of what that weighting can look like follows after this list.

3. Clarity on goals:

Being able to converse with content is a brilliant innovation, but it's a feature, not an end unto itself. When creating AI experiences around content, it's important to think carefully about the user behavior you're trying to create. Are you trying to drive subscriptions to a newsletter, more transactions on commerce content, or something else? Here's where the technology's limitations can be helpful: AI experiences tend to work better when they're created around a subset of content (say, evergreen guides).
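To ground the first two items above, here's a rough sketch — not any publication's actual system — of a retrieval step that declines out-of-scope questions (in the spirit of Joannabot's narrow remit) and ranks in-scope articles with a recency weight before they would be handed to a model. The keyword matching and the IN_SCOPE list are crude stand-ins for the embedding search and moderation layer a real system would use.

```python
import math
import re
from datetime import datetime, timezone

# Illustrative mini-corpus; a real system would use embeddings and a vector store.
ARTICLES = [
    {"title": "iPhone 16 review", "text": "Camera, battery and design tested in depth.",
     "published": datetime(2024, 9, 18, tzinfo=timezone.utc)},
    {"title": "iPhone 13 review", "text": "Camera, battery and design tested in depth.",
     "published": datetime(2021, 9, 20, tzinfo=timezone.utc)},
]

# Assumed scope for a narrowly focused product-review bot.
IN_SCOPE = {"iphone", "camera", "battery", "design", "review", "price"}

def tokens(text: str) -> set:
    return set(re.findall(r"[a-z0-9']+", text.lower()))

def in_scope(query: str) -> bool:
    """Crude stand-in for a real moderation/scope classifier."""
    return bool(tokens(query) & IN_SCOPE)

def score(article: dict, query: str, decay_days: float = 365.0) -> float:
    """Keyword relevance decayed exponentially by age, so newer coverage outranks stale coverage."""
    relevance = len(tokens(query) & tokens(article["text"]))
    age_days = (datetime.now(timezone.utc) - article["published"]).days
    return relevance * math.exp(-age_days / decay_days)

def retrieve(query: str, k: int = 1):
    if not in_scope(query):
        return None  # guardrail: decline rather than guess
    return sorted(ARTICLES, key=lambda a: score(a, query), reverse=True)[:k]

print(retrieve("How good is the camera?"))   # favors the newer review
print(retrieve("Who should I vote for?"))    # out of scope -> None
```

The exact decay curve and scope rules matter far less than the fact that someone decides them deliberately, which is really what knowledge management is about.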
 

The Benefits of Being AI-Ready

As I said before, knowledge management should be a part of content strategy even if the publication has no intention of creating a chatbot. Making your content "AI-ready" means it will also be better suited for indexing by third-party AI services. While much of the discussion about AI summaries over the past year has focused on content-licensing opportunities, an informational summary on a topic where your brand holds authority is also an audience opportunity.
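As one small, concrete example of what "AI-ready" can mean at the page level, here's a sketch (in Python, with made-up article details) that emits schema.org NewsArticle JSON-LD. Whether any given AI crawler honors that markup is outside a publisher's control, but structured metadata is a cheap way to make authorship and recency explicit to machines.

```python
import json
from datetime import date

def news_article_jsonld(headline, author, published, modified, url):
    """Emit schema.org NewsArticle markup making provenance and recency machine-readable."""
    return json.dumps({
        "@context": "https://schema.org",
        "@type": "NewsArticle",
        "headline": headline,
        "author": {"@type": "Person", "name": author},
        "datePublished": published,
        "dateModified": modified,
        "mainEntityOfPage": url,
    }, indent=2)

# Hypothetical article; embed the output in a <script type="application/ld+json"> tag.
print(news_article_jsonld(
    headline="iPhone 16 review",
    author="Jane Reporter",
    published=str(date(2024, 9, 18)),
    modified=str(date(2024, 10, 1)),
    url="https://example.com/reviews/iphone-16",
))
```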

There's precious little data about how discovery works through AI engines, but Google has said that, although click-through on an AI summary is lower than with traditional search, those who do click through are much more inclined to engage with your content and become loyal readers. In short: AI summarization may be bad for traffic, but it's good for audience.

That's why I believe that, if a publication practices good knowledge management, it can thrive in the AI era. And there's a good chance it will be better for our information ecosystem than the previous era, where the incentives of search and social reigned supreme. The content that characterized that period — endless reblogging of bite-size news, hot takes crafted solely to provoke, abusing SEO authority for cheap evergreen content — put the values of journalism through a fun-house mirror.

An AI-first content strategy can prioritize the substantive over the superficial, the unique over the optimized, and the thoughtful over the provocative. AI isn't a panacea for everything that ails the media, but by adapting content strategy to its rules, newsrooms can use AI as a catalyst for better journalism.

