Feb. 2, 2023, 10 a.m.
Reporting & Production

Is text-generating AI an industry killer or just another wave of hype?

“There can potentially be massive shifts, benefits, and risks in many industries, but I cannot see a scenario where this is a ‘sky is falling’ kind of issue.”

If you have been reading all the hype about the latest artificial intelligence chatbot, ChatGPT, you might be excused for thinking that the end of the world is nigh.

The clever AI chat program has captured the public's imagination with its ability to generate poems and essays instantaneously, mimic different writing styles, and pass some law and business school exams.

Teachers are worried students will use it to cheat in class (New York City public schools have already banned it). Writers are worried it will take their jobs (BuzzFeed and CNET have already started using AI to create content). The Atlantic declared that it could “destabilize white-collar work.” Venture capitalist Paul Kedrosky called it a “pocket nuclear bomb” and chastised its makers for launching it on an unprepared society.

Even the CEO of the company that makes ChatGPT, Sam Altman, has been telling the media that the worst-case scenario for AI could mean “lights out for all of us.”

But others say the hype is overblown. Meta’s chief AI scientist, Yann LeCun, told reporters ChatGPT was “nothing revolutionary.” University of Washington computational linguistics professor Emily Bender warns that “the idea of an all-knowing computer program comes from science fiction and should stay there.”

So, how worried should we be? For an informed perspective, I turned to Princeton computer science professor Arvind Narayanan, who is currently co-writing a book on “AI snake oil.” In 2019, Narayanan gave a talk at MIT called “How to recognize AI snake oil” that laid out a taxonomy of AI from legitimate to dubious. To his surprise, his obscure academic talk went viral, and his slide deck was downloaded tens of thousands of times; his accompanying tweets were viewed more than two million times.

Narayanan then teamed up with one of his students, Sayash Kapoor, to expand the AI taxonomy into a book. Last year, the pair released a list of 18 common pitfalls committed by journalists covering AI. (Near the top of the list: illustrating AI articles with cute robot pictures. The reason: anthropomorphizing AI incorrectly implies that it has the potential to act as an agent in the real world.)

Narayanan is also a co-author of a textbook on fairness and machine learning and led the Princeton Web Transparency and Accountability Project to uncover how companies collect and use personal information. He is a recipient of the White House’s Presidential Early Career Award for Scientists and Engineers.

Our conversation, edited for brevity and clarity, is below.

Angwin: You have called ChatGPT a “bullshit generator.” Can you explain what you mean?

Narayanan: Sayash Kapoor and I call it a bullshit generator, as have others. We mean this not in a normative sense but in a relatively precise sense. We mean that it is trained to produce plausible text. It is very good at being persuasive, but it's not trained to produce true statements. It often produces true statements as a side effect of being plausible and persuasive, but that is not the goal.

This actually matches what the philosopher Harry Frankfurt has called bullshit, which is speech that is intended to persuade without regard for the truth. A human bullshitter doesn’t care if what they’re saying is true or not; they have certain ends in mind. As long as they persuade, those ends are met. Effectively, that is what ChatGPT is doing. It is trying to be persuasive, and it has no way to know for sure whether the statements it makes are true or not.
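To make the distinction concrete, here is a minimal sketch of what "trained to produce plausible text" looks like in practice, using the small open-source GPT-2 model as a stand-in (an illustrative assumption; it is not the model behind ChatGPT). The sampling loop below scores continuations by likelihood alone; nothing in it checks the output against reality.

# Minimal sketch: a language model samples whatever continuation is
# statistically plausible. No step here verifies truth. GPT-2 is used as a
# small, open stand-in for ChatGPT-scale systems.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")

prompt = "The first person to walk on the Moon was"
input_ids = tokenizer(prompt, return_tensors="pt").input_ids

with torch.no_grad():
    output = model.generate(
        input_ids,
        max_new_tokens=20,
        do_sample=True,                        # sample from the token distribution
        top_p=0.9,                             # keep only the most plausible tokens
        pad_token_id=tokenizer.eos_token_id,
    )

# "Plausible" and "true" coincide only when the training data happens to
# make the truthful continuation the likeliest one.
print(tokenizer.decode(output[0], skip_special_tokens=True))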

Angwin: What are you most worried about with ChatGPT?

Narayanan: There are very clear, dangerous cases of misinformation we need to be worried about. For example, people using it as a learning tool and accidentally learning wrong information, or students writing essays with ChatGPT when they're assigned homework. I learned recently that CNET has been, for several months now, using these generative AI tools to write articles. Even though they claimed that human editors had rigorously fact-checked them, it turns out that's not been the case. CNET has been publishing articles written by AI without proper disclosure, as many as 75 articles, and some turned out to have errors that a human writer would most likely not have made. This was not a case of malice, but it is the kind of danger we should be more worried about: people turning to these tools because of the practical constraints they face. When you combine that with the fact that the tool doesn't have a good notion of truth, it's a recipe for disaster.

Angwin: You have developed a taxonomy of AI where you describe different types of technologies that all fall under this umbrella of AI. Can you tell us where ChatGPT fits into this taxonomy?

Narayanan: ChatGPT is part of the generative AI category. Technologically, it's pretty similar to text-to-image models, like DALL-E [which creates images based on text instructions from a user]. They are related to AI that's used for perception tasks. This type of AI uses what are called deep learning models. About a decade ago, computer vision technologies started to get good at distinguishing between a cat and a dog, something people can do very easily.

What’s been different in the last five years is that, because of a new technology called transformers and other related technologies, computers have gotten good at reversing the perception task of identifying a cat or dog. This means that, given text prompts, they can actually generate a plausible image of a cat or a dog or even fanciful things like an astronaut riding a horse. The same thing is happening with text: Not only are models taking a piece of text and classifying it, but given a prompt, these models can essentially run classification in reverse and produce plausible text that might fit into the category given.

Angwin: Another category of AI you discuss is automating judgment. Can you tell us what this includes?

Narayanan: I think the best example of automating judgment is content moderation on social media. It is clearly imperfect; there have been so many notable failures of content moderation, many with deadly consequences. Social media has been used to incite violence, even perhaps genocidal violence in many parts of the world, including in Myanmar, Sri Lanka, and Ethiopia. These were all failures of content moderation, including content moderation AI.

However, things are improving. It is possible, at least to some degree, to take the work of human content moderators and train models to make those judgments about whether an image represents nudity or hate speech. There will always be inherent limitations, but content moderation is a dreadful job. It's a job that's filled with the trauma of looking at images of gore and beheadings and all kinds of horrible things day in and day out. If AI can minimize the human labor, that's a good thing.

I think there are certain aspects of the content moderation process that should not be automated. Deciding the line between acceptable and unacceptable speech is time-consuming. It’s messy. It needs to involve input from civil society. It’s constantly shifting and culture-specific. And it needs to be done for every possible type of speech. Because of all that, AI has no role here.
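As a toy illustration of the approach described above, training a model to reproduce judgments that human moderators have already made, here is a deliberately tiny sketch with invented posts and labels; production systems use far larger datasets and models, and inherit the blind spots of the labels they learn from.

from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Hypothetical moderator-labeled examples (1 = violates policy, 0 = allowed).
posts = [
    "I hope you have a great day",
    "people like you should be wiped out",
    "selling my old bike, message me",
    "go back to where you came from or else",
]
labels = [0, 1, 0, 1]

# Train a simple text classifier to imitate the human judgments above.
model = make_pipeline(TfidfVectorizer(), LogisticRegression())
model.fit(posts, labels)

# The model can now screen new posts, sparing humans some of the volume,
# but it only reproduces the judgments (and mistakes) it was shown.
print(model.predict(["you people deserve what is coming to you"]))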

Angwin: Another category of AI that you describe is one that aims to predict social outcomes. You are skeptical of this type of AI. Why?

Narayanan: This is the kind of AI where decision-makers predict what someone might do in the future and use that to make decisions about them, often to preclude certain paths. It's used in hiring, and it's famously used in criminal-risk prediction. It's also used in contexts where the intent is to help someone. For example, this person is at risk of dropping out of college; let's intervene and suggest that they switch to a different major.

What all of these have in common is statistical predictions based on rough patterns and correlations in the data about what a person might do. These predictions are then used to some degree to make decisions about them, and in many cases, deny them certain opportunities, limit their autonomy, and take away the opportunity for them to prove themselves and show they’re not defined by statistical patterns. There are many fundamental reasons why we might want to consider most of these AI applications to be illegitimate and morally impermissible.

When an intervention is made based on a prediction, we need to ask, “Is that the best decision we can make? Or is the best decision one that doesn’t correspond to a prediction at all?” For instance, in the criminal-risk prediction scenario, the decision that we make based on predictions is to deny bail or parole, but if we move out of the predictive setting, we might ask, “What is the best way to rehabilitate this person into society and decrease the chance that they will commit another crime?” It opens up the possibility of a much wider set of interventions.
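The pattern Narayanan criticizes can be caricatured in a few lines: a model fits correlations in historical records, and a single threshold on its score becomes a yes-or-no decision about a person. Every number below is invented for illustration.

import numpy as np
from sklearn.linear_model import LogisticRegression

# Hypothetical historical records: [prior_arrests, age] and whether the
# person was re-arrested (1) or not (0).
X = np.array([[0, 35], [4, 22], [1, 45], [6, 19], [0, 52], [3, 27]])
y = np.array([0, 1, 0, 1, 0, 1])

risk_model = LogisticRegression().fit(X, y)

# The "decision": deny release if the predicted probability crosses a threshold.
person = np.array([[2, 24]])
risk = risk_model.predict_proba(person)[0, 1]
print(f"predicted risk: {risk:.2f} -> {'deny' if risk > 0.5 else 'grant'} release")

# Nothing in this pipeline asks what intervention would actually help the
# person; it only extrapolates from past correlations.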

Angwin: Some people are warning of a ChatGPT “doomsday,” with lost jobs and the devaluing of knowledge. What is your take?

Narayanan: Assume that some of the wildest predictions about ChatGPT are true and it will automate entire job categories. By way of analogy, think about the most profound information technology developments of the last few decades, like the internet and smartphones. They have reshaped entire industries, but we've learned to live with them. Some jobs have gotten more efficient. Some jobs have been automated, so people have retrained themselves or shifted careers. There are some harmful effects of these technologies, but we're learning to regulate them.

Even with something as profound as the internet or search engines or smartphones, it’s turned out to be an adaptation, where we maximize the benefits and try to minimize the risks, rather than some kind of revolution. I don’t think large language models are even on that scale. There can potentially be massive shifts, benefits, and risks in many industries, but I cannot see a scenario where this is a “sky is falling” kind of issue.

Illustration created using (of course) Midjourney’s AI.
