The growing stream of reporting on and data about fake news, misinformation, partisan content, and news literacy is hard to keep up with. This weekly roundup offers the highlights of what you might have missed.
“Crocodiles sleep with their eyes closed”? Just out: The Psychology of Fake News: Accepting, Sharing, and Correcting Misinformation, a new collection of research articles edited by Rainer Greifeneder, Mariela Jaffé, Eryn Newman, and Norbert Schwarz. The book, published by Routledge, is available as a free download or online read (and here it is on Kindle), and it includes a lot of research on why and how people believe false information.
In several of the chapters, researchers look at “what makes a message ‘feel’ true, even before we have considered its content in any detail,” and consider the implications of this for misinformation. Here are some things that make people believe something is true:
The influence of repetition is most pronounced for claims that people feel uncertain about, but is also observed when more diagnostic information about the claims is available (Fazio, Rand, & Pennycook, 2019; Unkelbach & Greifeneder, 2018). Worse, repetition even increases agreement among people who actually know that the claim is false, knowledge they would notice if they stopped to think about it (Fazio, Brashier, Payne, & Marsh, 2015). For example, repeating the statement “The Atlantic Ocean is the largest ocean on Earth” increased its acceptance even among people who knew that the Pacific is larger. When the repeated statement felt familiar, they nodded along without checking it against their knowledge. Even warning people that some of the claims they will be shown are false does not eliminate the effect, although it attenuates its size. More importantly, warnings only attenuate the influence of repetition when they precede exposure to the claims; warning people after they have seen the claims has no discernible influence (Jalbert, Newman, & Schwarz, 2019).
Merely having a name that is easy to pronounce is sufficient to endow the person with higher credibility and trustworthiness. For example, consumers trust an online seller more when the seller’s eBay username is easy to pronounce — they are more likely to believe that the product will live up to the seller’s promises and that the seller will honor the advertised return policy (Silva, Chrobot, Newman, Schwarz, & Topolinski, 2017). Similarly, the same claim is more likely to be accepted as true when the name of its source is easy to pronounce (Newman et al., 2014).
Even exposing people to only true information can make it more likely that they accept a false version of that information as time passes. Garcia-Marques, Silva, Reber, and Unkelbach (2015) presented participants with ambiguous statements (e.g., “crocodiles sleep with their eyes closed”) and later asked them to rate the truth of statements that were either identical to those previously seen or that directly contradicted them (e.g., “crocodiles sleep with their eyes open”). When participants made these judgments immediately, they rated repeated identical statements as more true, and contradicting statements as less true, than novel statements, which they had not seen before. One week later, however, identical as well as contradicting statements seemed more true than novel statements. Put simply, as long as the delay is short enough, people can recall the exact information they just saw and reject the opposite. As time passes, however, the details get lost and contradicting information feels more familiar than information one has never heard of — yes, there was something about crocodiles and their eyes, so that’s probably what it was.

As time passes, people may even infer the credibility of the initial source from the confidence with which they hold the belief. For example, Fragale and Heath (2004) exposed participants two or five times to statements like “The wax used to line Cup-o-Noodles cups has been shown to cause cancer in rats.” Next, participants learned that some statements were taken from the National Enquirer (a low credibility source) and some from Consumer Reports (a high credibility source) and had to assign the statements to their likely sources. The more often participants had heard a statement, the more likely they were to attribute it to Consumer Reports rather than the National Enquirer. In short, frequent exposure not only increases the apparent truth of a statement, it also increases the belief that the statement came from a trustworthy source.
Similarly, well-intentioned efforts by the Centers for Disease Control and the Los Angeles Times to debunk a rumor about “flesh-eating bananas” morphed into the belief that the Los Angeles Times had warned people not to eat those dangerous bananas, thus reinforcing the rumor (Emery, 2000). Such errors in source attribution increase the likelihood that people convey the information to others, who themselves are more likely to accept (and spread) it, given its alleged credible source (Rosnow & Fine, 1976).
[People] were asked to participate in a trivia test where they saw a series of general knowledge claims appear on a computer screen (Newman et al., 2012). The key manipulation in this experiment was that half of the claims appeared with a related non-probative photo [Ed. note: i.e., the photo provided no evidence for the claim one way or the other], much like the format one might encounter in the news or on social media, and half of the claims appeared without a photo. For example, participants in this trivia study saw claims like “Giraffes are the only mammals that cannot jump” presented either with a photo, like the headshot of a giraffe[…]or without a photo. Despite the fact that the photos provided no evidence of whether the claims were accurate or not — the headshot of the giraffe tells you nothing about whether giraffes can jump — the presence of a photo biased people toward saying the associated claims were true. Photos produced truthiness, a bias to believe claims with the addition of non-probative information.

In another set of experiments, published in the same article, Newman and colleagues conceptually replicated the finding. In these experiments, participants were asked to play a different trivia game: “Dead or Alive” (a game that a co-author remembered from old radio programming). The key task was to judge whether the claim “This person is alive” was true or false for each celebrity name that appeared on the screen. Half the time, those celebrity names appeared with a non-probative photo — a photo that depicted the celebrity engaged in their profession but did not provide any evidence about the truth of the claim “This person is alive”. For instance, subjects may have seen the name “Nick Cave” with a photo of Nick Cave on stage with a microphone in his hand, singing to a crowd[…]Nothing about the photo provided any clues about whether Nick Cave was in fact alive or not. In many ways, the photos were simply stock photos of the celebrities.
The findings from this experiment were clear: people were more likely to accept the claim “This person is alive” as true when the celebrity name appeared with a photo, compared to when there was no photo present. Perhaps more surprisingly, the same pattern of results was found when another group of subjects were shown the same celebrity names, with the same celebrity photos, but evaluated the opposite claim: “This person is dead”. In other words, the very same photos nudged people toward believing not only claims that the celebrities were “alive” but also claims that the same people were “dead”.
Across a series of experiments, Cardwell, Lindsay, Förster, and Garry (2017) asked people to rate how much they knew about various complex processes (e.g., how rainbows form). Half the time, people also saw a non-probative photo with the process statement (e.g., seeing a photo of a watch face with the cue “How watches work”). Although the watch face provides no relevant information about the mechanics of a watch, when people saw a photo with a process cue, they claimed to know more about the process in question. When Cardwell et al. examined actual knowledge for these processes, those who saw photos had explanations that were similar in quality to those who did not see a photo. In the context of fake news and misinformation, such findings are particularly worrisome and suggest that stock photos in the media may not only bias people’s assessments of truth but also lead to an inflated feeling of knowledge or memory about a claim they encounter.
You can check out the full book here.