Nieman Foundation at Harvard
May 30, 2024, 11:41 a.m. | Link: restofworld.org | Posted by Joshua Benton

Last week, the technology site Rest of World published a story on the apparent failings of WhatsApp misinformation “tip lines” in India. These are services that allow Indians to forward potential fake news and get information on its veracity. (We linked to it in our daily newsletter on Tuesday.) Here’s an excerpt:

Nearly a billion Indians are expected to vote in the ongoing elections, where misinformation and disinformation are among the biggest threats to the country’s political and social fabric, according to the World Economic Forum.

To combat this challenge, several fact-checkers, journalists, and even social media companies have launched or rebooted dozens of fact-checking helplines over WhatsApp, locally called tip lines. A user can verify content including audio, videos, and photos by sending it to the tip line’s WhatsApp number…

Rest of World tested 11 prominent tip lines by running 15 pieces of content through them. The content included 10 viral AI-generated videos — known to carry election-related misinformation — and five real videos that had been edited to mislead voters. Most of the tip lines struggled to provide a conclusive answer. Those that did were often inconsistent in their responses to local language content. Some tip lines took up to two days to generate an answer.

Nearly all the tip lines first sent automated bot responses saying “no fact-checks found” and passed on the query to human fact-checkers. The tip lines did not provide a follow-up response 85% of the time.

At one level, it shouldn’t be surprising that something as complicated as automated AI fact-checking — in up to 10 different languages! in a country with nearly 1 billion eligible voters! — is far from perfect. And in Rest of World’s count, the tip lines generated more correct (16) than incorrect (4) responses; the real problems were the 145 times when they generated no response at all.

But it didn’t take long for people involved in the tip lines to raise questions about the reporting. The Misinformation Combat Alliance (“a cross-industry alliance bringing companies, organizations, institutions, industry associations and government entities together to combat misinformation and fake news”) said the story was “riddled with inaccuracies and mischaracterizations” and published a lengthy response, signed by representatives of 13 organizations. Among the concerns they raised:

  • Rest of World’s numbers were wrong. “Our tipline dashboards and chat history indicate a significantly higher response rate than what is quoted in the story.”
  • The reporters sent so many queries in quick succession that they triggered the tip lines’ spam filters. “For instance, we received multiple identical queries asking, ‘Can you verify this?’ within a short timeframe. In one instance, 11 queries were sent within 7 minutes without following the established flow.”
  • Rest of World didn’t reach out to all 11 tip lines for comment and, for those it did contact, gave “less than 24 hours to respond to exceedingly vague requests.”

Ruby Dhingra, the managing editor of Newschecker (a member of the Misinformation Combat Alliance and a signatory of its blog post), wrote in a separate post that Rest of World also misunderstood what the tip lines are meant to do: “While some automation is used, it was never intended for users to receive automated and immediate replies for all queries, a point emphasized by the experts in the article but ignored by the RoW team.”

One of the MCA’s initiatives, the Deepfakes Analysis Unit, offered additional detail in a thread.

Yesterday, Rest of World retracted the article, saying it had “determined that the piece did not meet our editorial standards for accuracy and journalistic integrity.” Here’s Anup Kaphle, Rest of World’s editor-in-chief:

As part of our reporting for this story, Rest of World ran an experiment by sending AI content through a number of verification tools built by fact-checking organizations, and called out repeated failures to identify “fake” material. Rest of World failed to reach out to all of the organizations named in the story to account for, or respond to, perceived lapses in their fact-checking systems. As a result, we failed to identify potential flaws in our own reporting methodology. After publication, a number of the fact-checking organizations featured in the story noted that our queries were flagged as spam by their tip lines. Our story also contained inaccuracies about the number of queries sent to the tip lines, the number of accurate responses, and the response time from the tip lines.

We deeply regret any harm that our reporting has caused. Rest of World is dedicated to upholding the highest editorial standards, and we take full responsibility for the oversight. Our commitment to our readers is to provide accurate, reliable, and thoroughly verified news. We fell short of this commitment.

For the curious, you can still read the original version in the Wayback Machine.

Photo taken April 12, 2024, of campaign advertising for Navneet Kaur Rana, a BJP candidate in Amravati Lok Sabha constituency, by Ganesh Dhamodkar, used under a Creative Commons license.
