Oct. 5, 2018, 10:17 a.m.
Audience & Social

A new study provides some dispiriting evidence for why people fall for stupid fake images online

Plus: A U.K. report calls for governments to tread cautiously when it comes to fake news, as some other governments seem prepared to do the opposite.

The growing stream of reporting on and data about fake news, misinformation, partisan content, and news literacy is hard to keep up with. This weekly roundup offers the highlights of what you might have missed.

C’mon, guys, look at the source! So if you’re assessing the credibility of a possibly fake image online, you’re looking at stuff like the source, how many times it’s been shared, and what the image shows, right? Not so much, according to a new study out of UC Davis. Instead, what matters are digital media literacy skills, experience or skill in photography, and prior attitudes about the issue.

The researchers asked 3,476 participants on Mechanical Turk to evaluate the credibility of six fake images, some of which were allegedly from sources like BuzzFeed and The New York Times. “The images depicted (1) a bridge collapse in China, (2) a gay couple accompanied by their children, (3) a genetically modified mouse with a cat’s head, (4) a school in Africa, (5) a bombing in Syria, and (6) a Hispanic politician meeting with students.”

Our goal was to understand (1) how viewers evaluate image credibility online and (2) what contextual cues and features (image-related and non-image-related) impact their credibility judgment. By including only fake images in the study instead of using unaltered, original images, we wanted to avoid the possibility of the participants having previously encountered one of the images. Such familiarity could have influenced their credibility evaluation without consideration of the features we attempted to test in this study.

They found that “viewers’ skills and experience greatly impact their image credibility evaluations” — people who had a lot of experience on the internet and social media, and people who had some background in photography, were better at evaluating image credibility. But…

“None of the image context features tested — for example, where the image was posted or how many people liked it — had an impact on participants’ credibility judgments. Our findings also reveal that credibility evaluations are far less impacted by the content of an online image. Instead they are influenced by the viewers’ backgrounds, prior experiences, and digital media literacy.”

In other words, seeing that an image was from BuzzFeed, NPR, or The New York Times didn’t make users more or less likely to judge it as credible. The researchers also re-ran part of the experiment (judging the image of the genetically modified cat/mouse) with a group of college students and saw similar results.

The study’s lead author, Cuihua Shen, called the findings “upsetting,” and there is indeed a strain of #lolnothingmatters here that should be somewhat alarming both to publishers and to proponents of fake-news-fighting tools that rely heavily on identifying reliable sources.

How do we react to misinformation without freaking out? The U.K. fact-checking organization Full Fact released a report this week arguing that “rushing to come up with quick solutions to the range of issues could do more harm than good.” Or, as the authors write, “People getting things wrong online is not in itself a harm that merits a policy response.”

“Recognize that the greatest risk is of government overreaction and put the protection of free speech at the forefront of every discussion about tackling misinformation in its many forms,” the authors write. “We should take advantage of the window of opportunity we have to consider and deliver a proportionate response.” The areas where they see the most urgency are updating U.K. law and increasing transparency around political advertising.

The report also includes a chart showing the misinformation-related actions that countries around the world are taking. Craig Silverman wrote for BuzzFeed News this week about a proposed fake news law in Singapore that, if passed, “would be the most far-reaching fake news law passed by a government so far.” Singapore ranks 150 out of 180 in Reporters Without Borders’ World Press Freedom Index. The law would give the government

“powers to swiftly disrupt the spread and influence of online falsehoods” and to prevent people from earning money from online falsehoods. It calls for criminal penalties for those who meet a threshold of “serious harm such as election interference, public disorder, and the erosion of trust in public institutions.”

“If it establishes a new model for the speedy removal of content from Facebook, Google, and Twitter,” Silverman writes, “it could be emulated by others.”

And BuzzFeed’s Davey Alba and Charlie Warzel reported that Facebook recently met with “academics, researchers, and civil society organizations from Myanmar, the Philippines, Sri Lanka, and elsewhere to discuss misinformation and propaganda.” One thing they reportedly discussed was “country-specific policies and community standards tailored to crucial cultural nuances of the regions, rather than blanket policies for [Facebook’s] 2.23 billion users worldwide.” One hypothetical:

A piece of inflammatory or violent content that would typically be quickly contextualized or deemed newsworthy in the United States and parts of Europe might be allowed to remain on the platform, while it would be removed in other countries where it is more likely to be quickly decontextualized, weaponized, and reposted.

Wikipedia bans Breitbart as a source of facts. Breitbart should “not be used, ever, as a reference for facts, due to its unreliability,” Wikipedia editors voted last month. From Motherboard’s Samantha Cole:

Last week, Wikipedia editors held a similar vote for Occupy Democrats, a progressive website. As it’s a political activist movement outlet, and not a reliable news source, it can’t be cited on Wikipedia as fact.

They also started a discussion about InfoWars’ reliability, but the vote was closed under the “Snowball Clause,” on the grounds that no sane editor would cite InfoWars in the first place (giving it a “snowball’s chance in hell” of being used and left up on the site except in exceptional cases). In any case, InfoWars is also generally banned from being a source on the site.

“‘Information noise’ is the promulgation of unassailable, ultimately trivial facts deceptively packaged as meaningful news.”

Illustration from L.M. Glackens’ The Yellow Press (1910) via The Public Domain Review.

Laura Hazard Owen is the editor of Nieman Lab. You can reach her via email (laura@niemanlab.org) or Bluesky DM.
PART OF A SERIES     Real News About Fake News