Nieman Lab: Predictions for Journalism, 2024
You know things are getting weird when people’s grandparents are playing with technology that over half the population thinks is bad news and many think is the beginning of the end of humanity. Not Facebook. ChatGPT.
But is generative AI bad news for…the news? The sector still feels too frothy to tell. Even high-quality journalism seems to teeter between fear-mongering and a deeper understanding of what the combination of computing power and natural language processing might portend for the business. For those already on the ropes, our robot overlords are not welcome guests. And if you want evidence of whether the ambivalence we journalists feel is warranted, just ask an objective source:
I hate to admit it, but generally speaking, this is right. Whether you work as an illustrator, a journalist, or in a non-creative field, you’re probably dithering between two possible futures best described in Terminator 2. Is AI Arnold Schwarzenegger’s T-800, reaching out his hand and saying “Come with me if you want to live”? Or is AI the terminator who is walking up to us in the biker bar, saying “I need your clothes, your boots, and your motorcycle” before taking all three by force? The correct answer is…well…both, in the movie and real life.
In an audio news business that is struggling, efficiencies must be found. But…does this mean our chatbots will become reporter-bots? Host-bots? Anchor-bots? Asking the oracle yields more answers that, even if accurate, are inconclusive, though a skeptic might suspect some hard-wired techno-optimism.
The question about whether AI-borne efficiencies will actually level up our journalism isn’t new. Machines have been writing stories for us since the stock market became a place where by-the-minute plays could make millions. When we think about stories and deep impact, we usually think about the hard stuff, the long stuff, the stuff we can’t make by “finding efficiencies.”
Without breaking any news from within my own deskpod, I can tell you that at shops like WBUR, which has a union, the conversation about what to do with generative AI is very active, with editorial leadership and shop stewards working closely together to research and shape station guidelines. Much of our org’s conversation is about how we can use AI to help offset the growing number of tasks surrounding reporting and production that can feel like busywork. If you need 20 fields filled out to post a story on your website, and only one of them is where you craft and produce the actual journalism, might AI help? People are already using it to clean up archival tape.
But I’ve also been involved in conversations with colleagues who are thinking about distinguishing ourselves by opting out of AI in key areas. Not as a job-saving measure, but a brand-saving one. And as the ever-in-flux media business continues to evolve, I think we’re going to see more organizations define their identity by not opting into AI, and communicating that to their audience.
There are definitely exceptions to that prediction for high-quality news, and some of them hold promise. The Planet Money Bot, built by some of the NPR show’s key players and other tech-forward collaborators, is an interesting example. Unlike so many models that are trained on billions of data points from the trash heap of knowledge and toxicity of the internet, the bot is trained on a limited data set: Planet Money episodes. And it’s not designed to replace jobs as much as it is to provide actual quality information and foster discovery of quality audio journalism. We’ll likely see more of these kinds of limited-batch language models in the coming years. The question is whether they’re designed to replace, or elevate, our work.
So as the tech world continues to desperately throw money at the concept and hardware of AI, and some tech world leaders pooh-pooh the near-term impact, those of us at today’s versions of the looms and the plows — which is to say, any job, creative or otherwise, threatened by the latest leap in mechanization — have to white-knuckle it and hope the economists are wrong about who gets all the money. Maybe we’ll all eventually be on some permanent mission of self-improvement while the machines make us all filthy rich?
One of the largest forks in the road for AI has always been whether we build intelligent machines to be our partners in work, or whether we build them in our likeness. The latter is sexier, even when it’s snake oil, and unfortunately this is the path we seem to have chosen. It may not actually be the most helpful, and if that’s the case, many of us may view AI-generated content, media or otherwise, the same way we currently view scam bots, junk mail, and robocalls. It may be the stuff we have to sift through in an informational trash heap to find the things we actually want — including quality journalism that distinguishes itself by continuing to be made well by humans.
Ben Brock Johnson is the executive producer of podcasts at WBUR.