March 15, 2019, 10:25 a.m.
Audience & Social

A European movement encourages Facebook and Twitter to contact every person who has seen fake news

Plus: Pro-China accounts on Reddit and same-day election misinformation.

The growing stream of reporting on and data about fake news, misinformation, partisan content, and news literacy is hard to keep up with. This weekly roundup offers the highlights of what you might have missed.

Yellow vests, fake news, wide spread. The global activist organization Avaaz released a report on how fake news has spread on Facebook around France’s Yellow Vest anti-government movement. Avaaz found that disinformation in Yellow Vest Facebook groups and pages received over 105 million views and 4 million shares between November 1, 2018, and March 6, 2019. The report provides case studies of some of the most viral fake stories. Facebook’s fact-checkers have corrected some, but not others.

On YouTube, Russian state media outlet RT France’s channel was “the most viewed channel on videos related to the Yellow Vests in France and accumulated more than twice as many views [as] Le Monde, L’Obs, FRANCE 24, Le Figaro, LeHuffPost combined.”

RT France has massively invested in coverage of the Yellow Vest protests, including hour-long live coverage videos, and as a result, dominated the debate about Yellow Vests on YouTube in France more than any other YouTube channel, let alone mainstream media. RT has permeated the movement to the extent that protesters at one point chanted “Thank you, RT! Thank you, RT!” which the global RT network posted directly onto their YouTube channel.

Avaaz’s proposed solution: “Platforms must themselves work with fact-checkers to ‘Correct the Record’ by distributing independent third party corrections to EVERY SINGLE person who saw the false information in the first place.” The “Correct the Record” measure has gained some support: Damian Collins, the chair of U.K. Parliament’s Digital, Culture, Media, and Sport Committee, told Time last month: “Support for the Correct the Record measure highlights the growing concern about disinformation on social media and the threat this poses to our democracy. The big tech platforms should do more to act against known sources of disinformation and to warn users when they may have been exposed to it. We know they have the technical capability to do this.”

Pro-China trolls on Reddit. BuzzFeed News’s Craig Silverman and Jane Lytvynenko examine the possibility that pro-China accounts are coordinating activity on Reddit.

Sir Tim on the web’s ills. This week marked the 30th anniversary of the original proposal for the World Wide Web, which Tim Berners-Lee gave his boss at CERN on March 12, 1989. (We paid homage to the very first website to note the occasion.) Berners-Lee wrote an essay on what he sees as the web’s (and the Internet’s) most significant ills today:

Against the backdrop of news stories about how the web is misused, it’s understandable that many people feel afraid and unsure if the web is really a force for good. But given how much the web has changed in the past 30 years, it would be defeatist and unimaginative to assume that the web as we know it can’t be changed for the better in the next 30. If we give up on building a better web now, then the web will not have failed us. We will have failed the web.

To tackle any problem, we must clearly outline and understand it. I broadly see three sources of dysfunction affecting today’s web:

  • Deliberate, malicious intent, such as state-sponsored hacking and attacks, criminal behaviour, and online harassment.
  • System design that creates perverse incentives where user value is sacrificed, such as ad-based revenue models that commercially reward clickbait and the viral spread of misinformation.
  • Unintended negative consequences of benevolent design, such as the outraged and polarised tone and quality of online discourse.

While the first category is impossible to eradicate completely, we can create both laws and code to minimize this behaviour, just as we have always done offline. The second category requires us to redesign systems in a way that change incentives. And the final category calls for research to understand existing systems and model possible new ones or tweak those we already have.

Fake news publishers adjust to Facebook’s fact-checking efforts. At Poynter, Daniel Funke outlines how some hoax and fake news publishers react when Facebook, their primary distribution channel, flags their stories: sometimes by deleting or changing them.

In Factcheck.org’s article, Angelo Fichera reported that the false image, which purports to depict Ocasio-Cortez at a restaurant called Hot Dog on a Stick, was originally published Feb. 9 by the page America’s Last Line of Defense. The original post has since been deleted — and it isn’t the first time the page has done so.

America’s Last Line of Defense is operated by notorious hoaxer Christopher Blair, who publishes satirical content with the goal of tricking conservatives into sharing it. His popular false stories and images have become a major target for American fact-checkers.

Longtime Blair collaborator John Prager told Poynter in August that the reach of all their sites — which, at one point, were racking up about 1 million pageviews per month — had been decimated at the hand of Facebook’s fact-checking project. As a result, Facebook limited its monetization options.

Before you cry too hard for America’s Last Line of Defense and its “decimated” reach, note that its sister site Conservative Tears still managed to publish the sixth most-shared story of 2019 so far: a fake claim that Henry Winkler had died.

Fake news, 2016-2018. In a new report, “Truth on the Ballot,” PEN America offers a heavily footnoted chronological overview of fake news developments, research, and news reports since 2016. If at some point you’re trying to remember who exactly did what when, or what awful thing some particular investigation by a digital news outlet revealed, check out this report.

One interesting section is about same-day election misinformation:

On Election Day — which is after all a very specific window of time — fraudulent news featured or shared on social media is well-placed to outstrip the efforts of fact-checkers who might otherwise have more time to debunk fake stories. Twitter, in particular, is perhaps more likely to host disinformation on Election Day itself, given the platform’s focus on immediate and bite-sized commentary. The Brennan Center for Justice, in its review of online voter suppression during the 2018 elections — with “voter suppression” referring to fraudulent news as well as other tactics designed to suppress voter turnout — declared that they found notable online suppression campaigns “especially” on Twitter. In its Retrospective Review of the midterm elections, Twitter reported, “The vast majority of violative content we removed from our service on Election Day was voter suppressive content,” amounting to 6,000 tweets.

Illustration from L.M. Glackens’ The Yellow Press (1910) via The Public Domain Review.

Laura Hazard Owen is the editor of Nieman Lab. You can reach her via email (laura@niemanlab.org) or Bluesky DM.