Nieman Lab: Predictions for Journalism, 2025.
The hype wave around AI has peaked (or close to it), but its impact on the industry hasn’t yet been felt. I don’t predict that AI will take us “boldly where no one has gone before.” Instead, it will help us revisit old territory, digging deeper into abandoned wells to uncover untapped value, hidden insights, and opportunities once deemed impractical. What are some pivots, once deemed infeasible, that might be worth a second look using new technology?
Remember the early days of the web? Did you use Ask Jeeves, Lycos, or AltaVista? Before Google became the dominant player, search was a competitive space.
Some of the best AI projects in 2024 were lookups powered by retrieval-augmented generation (RAG). Examples include the San Francisco Chronicle’s Kamala Harris News Assistant, The Washington Post’s Ask the Post AI (which followed its Climate Answers bot), and Business Insider’s recently released AI-powered BI search.
What makes all of these efforts unique is the dataset that underpins them. News organizations that have or can create unique datasets now have the special ingredients to power a niche search engine that can compete with Google on that topic. Newspapers once dreamed of being the front page of the internet. That never materialized, but if they can create a “niche search” experience that is valuable enough, they could be the go-to source again on certain topics. If these niche search engines succeed, when you have a question, you won’t reflexively “Google it.”
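To make the pattern concrete, here is a minimal sketch of the kind of RAG lookup these tools share: retrieve the most relevant passages from a proprietary archive, then ask a language model to answer using only that context. The embed() and generate() functions are placeholders for whichever embedding model and LLM a newsroom actually uses; none of this is meant to describe how the Chronicle, the Post, or Business Insider built theirs.

```python
# Minimal sketch of a RAG-style "niche search" lookup over a news archive.
# embed() and generate() are placeholders, not a specific vendor API.

import numpy as np

def embed(texts: list[str]) -> np.ndarray:
    """Placeholder: return one embedding vector per text."""
    raise NotImplementedError("plug in your embedding model here")

def generate(prompt: str) -> str:
    """Placeholder: call whatever LLM the newsroom has access to."""
    raise NotImplementedError("plug in your LLM call here")

def answer(question: str, archive: list[str], k: int = 5) -> str:
    """Retrieve the k most relevant archive passages, then ask the LLM
    to answer using only that retrieved context."""
    doc_vecs = embed(archive)            # shape: (n_docs, dim)
    q_vec = embed([question])[0]         # shape: (dim,)
    # Cosine similarity between the question and every passage.
    sims = doc_vecs @ q_vec / (
        np.linalg.norm(doc_vecs, axis=1) * np.linalg.norm(q_vec) + 1e-9
    )
    top = [archive[i] for i in np.argsort(sims)[::-1][:k]]
    context = "\n\n".join(top)
    prompt = (
        "Answer the question using only the reporting below. "
        "If the answer isn't there, say so.\n\n"
        f"Reporting:\n{context}\n\nQuestion: {question}"
    )
    return generate(prompt)
```

The defensible part isn’t the code, which any competitor can write; it’s the archive the retrieval step runs over.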
There was a time when many news stories had two people involved: the reporter and the writer. The reporter was the stereotypical fast-talking journalist who knew everyone at City Hall. They’d get the scoop and call it in over the phone to the writer. The writer, trained on the keyboard, had a way with words and could put together the story lickety-split. Often the reporter couldn’t write their way out of a paper bag and the writer didn’t know the first thing on how to coax information out of a source. They were a pair.
In modern times, the two roles have become one; many journalists even refer to themselves as “writers” first. What happens when the cost to produce content (the text, audio, video, etc.) drastically decreases? Many journalists fear losing their jobs to this drop in production costs. I have two responses. The first is that the other half of the original role of “journalist” remains integral and relatively untouched by AI. This is the “return of the reporter,” who can get the scoop with good old-fashioned shoe-leather reporting — by finding something new.
In coming years, journalists will need to intensify and refocus on their ability to discover the “new,” build relationships, understand community trends, and more. They’ll increasingly make their mark with an ability to identify and find new information, rather than how well they can put a string of words together.
The height of hyperlocal may have come around 2008, when Patch was the largest attempt at a network of hyperlocal newsrooms. Ultimately, it was unsustainable. While Patch wasn’t beloved by many in the industry, it was the lack of ROI that dug its grave.
Today, one of the biggest concerns about AI in newsrooms is that it will destroy jobs (see above). But, counterintuitively, AI could actually create more jobs, because the cost of opening up new newsrooms will drop.
News deserts exist because covering certain areas is not economically feasible. AI could lower the cost, allowing teams of one to three people to successfully cover an area without additional or back office support. In this case, perhaps, hyperlocal becomes ascendant. Will it be a network, à la Patch? I don’t think so. But we are already seeing small independent “newsrooms” that leverage AI to bring the cost of covering an area down.
Perhaps the loss of jobs will be at centralized locations with bloated numbers of staffers covering saturated national issues. If that loss is met with growth in distributed areas, letting journalists focus on being reporters and creating niche datasets, the net gain will be positive.
From around 2004 to 2012, “citizen journalism” or “engagement journalism” or anything that made the process of journalism more transparent and participatory was all the rage. There was a deep practical and philosophical push to recognize the audience (the people formerly known as the audience), acknowledging that they often knew more than we did. Much of the fervor of this movement was absorbed by social media, but the ethos has lived on under other names and spaces.
One of the biggest hurdles of the citizen journalism movement was the bottleneck of too much information, combined with a level of customer service and meaningful engagement that couldn’t scale without technology. It’s one thing to ask the audience for their input and insight, and another to wade through it all to find meaning while helping audience members understand their role in the process.
In 2025, we have an increasingly accessible technology designed to ingest copious amounts of information and derive meaning from it. Just as the cost required to make hyperlocal work has dropped, so has the mental cost required to sift through audience contributions, understand an ongoing conversation, and find the diamonds in the rough of user feedback. Our ability to do this can meaningfully improve journalism and create paths for the audience to feel connected to our work.
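As a rough illustration of what that sifting can look like (a sketch under assumptions, not any newsroom’s actual tooling), an LLM can work through audience submissions in batches, name the recurring themes, and flag the handful of specific, checkable tips worth a reporter’s time. The summarize() function below stands in for whatever model call is available, and the prompt and batch size are arbitrary.

```python
# Sketch of triaging a pile of audience submissions with an LLM: batch them,
# ask the model for recurring themes, and flag anything that looks like a
# specific, checkable tip. summarize() is a placeholder, not a real API.

def summarize(prompt: str) -> str:
    """Placeholder: call whatever LLM the newsroom has access to."""
    raise NotImplementedError("plug in your model call here")

def triage(submissions: list[str], batch_size: int = 50) -> list[str]:
    """Return one themes-and-tips digest per batch of submissions."""
    digests = []
    for start in range(0, len(submissions), batch_size):
        batch = submissions[start:start + batch_size]
        numbered = "\n".join(f"{i + 1}. {s}" for i, s in enumerate(batch))
        prompt = (
            "You are helping a local newsroom read audience submissions.\n"
            "1) List the recurring themes in the submissions below.\n"
            "2) Flag, by number, any submission that contains a specific, "
            "checkable tip a reporter should follow up on.\n\n"
            f"Submissions:\n{numbered}"
        )
        digests.append(summarize(prompt))
    return digests
```

The point of a setup like this is triage, not replacement: the model narrows the pile so a human can spend time on the submissions that deserve it.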
We’ve seen the various AI startups that take articles and turn them into videos.
I don’t think this pivot will take. I generally believe homogeneous video is not defensible as a content strategy. Will it be cheaper to make videos? Yes. Will those videos cross the uncanny valley and be unique enough for brand affinity to be built on top of them? Probably not.
The AI video creation space is amazing. I’m less enthusiastic about the idea that your average article needs a video asset. Let’s see what Hollywood does with the AI-to-TikTok pipeline before we jump in.
Circa was a unique project and experiment that, while relatively modest in reach, still stands as a groundbreaking example of rethinking how news and information can be structured, even 13 years after its launch.
The fundamental idea was to break stories down into “atomic units” that could be threaded together to tell an ongoing story. One could “follow” a story like you follow somebody on TwitteX-thread-sky-adon-hive-truths™. The ex-Circa folks I’ve spoken with agree that the LLM breakthroughs of today would have gotten us much closer to the fundamental vision of Circa.
Folks are starting to poke at its edges — Particle News, among others. Nobody, however, has yet gone for the “end game” of Circa, which was not to produce summaries but to change the nature of how news is organized: closer to an evolving database of information than a series of articles. See my entry above about unique datasets, then imagine how an “article” representation of those databases could reshape itself every time there was new information from the world or from the user.
Articles remain the base unit of information, but inside articles are smaller units of information waiting to be plucked out and re-organized to tell stories in a different way.
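To make the “atomic unit” idea concrete, here is a hypothetical sketch of a story modeled as an evolving collection of units that re-renders whenever new information arrives. The field names and the render step are illustrative assumptions, not a reconstruction of how Circa (or Particle News) actually works.

```python
# Sketch of a Circa-style "story database": atomic units of information that
# are threaded into an evolving story and re-rendered as new units arrive.
# Field names and the render format are illustrative assumptions.

from dataclasses import dataclass, field
from datetime import datetime
from typing import Optional

@dataclass
class Atom:
    """One discrete, citable unit of information."""
    text: str              # the fact, stat, quote, or event
    source: str            # where it was reported
    reported_at: datetime  # when it entered the record

@dataclass
class Story:
    """An evolving story: an ordered collection of atoms, not a fixed article."""
    topic: str
    atoms: list[Atom] = field(default_factory=list)

    def add(self, atom: Atom) -> None:
        """Insert a new unit and keep the timeline in order."""
        self.atoms.append(atom)
        self.atoms.sort(key=lambda a: a.reported_at)

    def render(self, since: Optional[datetime] = None) -> str:
        """Rebuild the 'article' view; pass `since` to show a returning
        reader only what has changed since they last checked in."""
        shown = [a for a in self.atoms if since is None or a.reported_at > since]
        return "\n".join(f"- {a.text} ({a.source})" for a in shown)
```

A returning reader could be shown only the atoms added since their last visit, which is the “follow” experience Circa was after.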
A bit of a repeat from my prediction last year (“I got 99 predictions but AI ain’t one”): As cheap and dubious content spreads online, print content, which is expensive to produce, will be seen as more trustworthy. The work required to report, publish, and distribute print media proves an effort — and therefore value — that is harder to fake. I call this “Proof of Trust,” inspired by Bitcoin’s “Proof of Work.”
Do I think print will rise like a phoenix from the ashes? No. Could there be a pivot, though? It’s happened before.
David Cohn is senior director of research and development for content at Advance Local.