Nieman Lab
Predictions for Journalism, 2025
In 1988, NASA scientist James Hansen told the U.S. Senate that man-made climate change was real, imminent, and potentially catastrophic. It was a perfect opportunity for the media to start covering the issue with the urgency and prominence it deserved. Yet journalists did not rise to the occasion. For decades, they peddled both-sidesism, failing to take the scientific consensus seriously and neglecting to discuss how the problem might be tackled.
In recent years, newsrooms have begun to cover the climate crisis properly. But that delay has been costly: Journalism’s failure has undoubtedly led the public and policymakers to take the issue much less seriously than they should have. Had we been faster to heed Hansen’s warnings, perhaps we wouldn’t find ourselves in such a mess today.
I fear we may be making the same mistake with artificial general intelligence (AGI) — AI that can achieve or surpass human-level performance across a wide range of cognitive tasks.
Bring up AGI with most journalists and you'll get an eye roll and a scoff. The idea of imminent, human-level AI, many say, is just a marketing stunt dreamt up by tech executives. It's a sci-fi fantasy — certainly not something to take seriously.
But talk to the people actually working at AI companies and you get a different story. For them, AGI isn’t a marketing ploy — it is, as former OpenAI board member Helen Toner recently told the Senate, “an entirely serious goal.” These people are sincerely trying to build computers that can do everything humans can do. And they expect to succeed soon: Many believe we’ll achieve this in the next two years, and even long-time skeptics like Meta’s Yann LeCun now think we’ll have AGI within a decade or two. Independent scientists and researchers, such as Yoshua Bengio and Nobel winner Geoffrey Hinton, agree.
They could be wrong; people have incorrectly thought AGI was imminent before. But given the astonishing rate of progress — and the fact that, as of this year, systems already surpass humans at a variety of PhD-level tasks — we ought to seriously consider the possibility that they are right, and that transformative AI is coming soon.
If so, the implications would be staggering. An AI system capable of doing everything a human can would allow for the automation of huge swathes of knowledge work. Thousands of human-level systems, working in parallel, could massively accelerate scientific progress. Economic growth could boom. Warfare could permanently change. The ramifications for democracy and geopolitics would be profound. And that’s before we even consider the catastrophic risks that many experts fear transformative AI could pose.
If such change is on the horizon, the public ought to be involved. But right now, almost everyone seriously engaging with the possibility of imminent AGI works at an AI company. In the absence of regulation, these companies are able to make unilateral decisions that will affect all of humanity. That is not acceptable.
Journalism must step up. Rather than treat AGI as a fringe concern, we must be proactive and ambitious: taking the possibility seriously, considering the implications, and starting a public, democratic conversation.
So what should newsrooms do? First, stop dismissively framing AGI discussions as marketing stunts. As Fortune’s Jeremy Kahn notes, assuming such talk is purely cynical marketing “may do readers a disservice.” Just because something sounds outlandish doesn’t mean it isn’t worth serious consideration.
Second, significantly expand AI coverage. At Tarbell, we consistently see important stories going uncovered due to resource constraints. If AGI is genuinely on the horizon, this may be the most consequential story of our time. It deserves commensurate attention and investment. Our fellowship and grants programs support AI reporting, but we can't do it alone.
Finally, journalists ought to interrogate what a world with AGI could or should look like. We can examine how the police and military will use — or abuse — such technology. We can platform academic voices investigating AGI’s potential effects on inequality. And we can scrutinize whether governments are doing enough to avert the technology’s potentially catastrophic risks.
The window for journalism to rise to this challenge is still open, but it’s closing fast. In 2025, we’ll either see newsrooms step up to help society grapple with these questions, or we’ll watch as some of history’s most consequential decisions are made without adequate public scrutiny or debate. We can either repeat the mistakes of climate coverage, or make a concerted effort to do better.
The choice, and responsibility, is ours.
Shakeel Hashim is grants director at Tarbell and editor of Transformer.