People fall for simple falsehoods even when they’re told they’ll earn money for answering correctly, according to the results of a new study published as a preprint (meaning the study has yet to be peer-reviewed).
The concept behind the finding is known as illusory truth: the idea that repeating false claims makes them seem more truthful.
Previous research has shown that repeating news headlines, even false ones, can lead people to believe them more strongly. The technique was on full display during the Trump era, with repeated lies about the most basic facts, more complicated claims, and everything in between.
The authors of the study, Purdue University’s Nadia Brashier and the Massachusetts Institute of Technology’s David G. Rand, had more than 1,100 participants, recruited through Amazon’s Mechanical Turk marketplace, take part in two distinct experiments.
In the first experiment, roughly 600 of the volunteers were shown a set of 16 statements. Most were trivia-like facts, but the researchers also slipped in some false, yet still plausible, statements. For instance, one statement said that Angel Falls is in Brazil. (False; it is in Venezuela.)
Participants were then asked to rate how interesting they found these statements. “The interest ratings themselves aren’t especially useful to us. It was just a way to make sure people are actually taking in the information,” Brashier said.
The volunteers were then immediately given another test, this time with 16 new statements mixed in with the previous 16, and were asked to give a binary “true” or “false” response to each. The goal: to see how just one prior exposure to a false statement would affect people’s memory of it and their likelihood of later judging it to be true.
There was a secondary component to this experiment: half the participants were told at the beginning of the questionnaire that one answer would be randomly chosen and, if it was correct, they would receive a $1 bonus. The goal was to see whether offering a financial incentive for accuracy would actually improve accuracy. The other half of the volunteers were offered no such incentive.
Brashier and Rand found that offering an incentive didn’t seem to improve accuracy. “That’s what I think is one of the scary things about this work. Just one previous exposure is enough to make information seem more credible,” Brashier said. Other research points to this being true for headlines as well as for claims about products, Brashier added.
After this quiz-like test, participants took one more test. This time, instead of presenting the volunteers with true-or-false statements, the researchers had them complete a cognitive reflection test, which measures a person’s tendency to reason past an intuitive but wrong answer to the correct one; the point was to see whether more analytical thinkers would fare better. For instance, one of the questions asked, “If you’re running a race and you pass the person in second place, what place are you in?” (People might be tempted to say “first,” when the correct answer is “second.”)
The researchers repeated the whole experiment with the remaining roughly 590 volunteers. This time, instead of a single prompt at the top of the trivia portion of the test reminding people of the $1 bonus, those sorted into the bonus-eligible arm of the study were reminded of the bonus each time they clicked through one of the 32 true-or-false statements they had to answer. The cognitive reflection portion was the same.
Brashier said the repeated reminders of the $1 bonus were added to the second experiment to rule out the possibility that participants in the first experiment had simply forgotten about the incentive, since they saw only a single reminder of it.
And still, nothing changed. “It’s definitely not the case that analytic thinkers were more responsive to the monetary incentive and better able to disengage [from following flawed thinking],” Brashier said.
In previous work, Brashier and her colleagues showed that one way to counter the effects of illusory truth is to have people behave like fact-checkers, rating the truth of a given statement and thereby forcing themselves to focus on accuracy. But this worked only when people had prior knowledge of a subject. And because people vary in how much political knowledge they have, Brashier explained, using an accuracy focus to combat repeated misinformation in news and headlines may not necessarily work.
In general, the brain’s tendency to link repetition with truth is not actually a bad thing. We rely on it, through rhymes, mnemonics and other tactics, to retain information all the time. “We actually learn from experience that repetition and truth are correlated,” Brashier said; repetition makes a statement easier to process, and we come to associate that fluency with truth, so it’s easy to fall into that kind of thinking. “But, in our post-truth world, that can lead us to believe things that are false.”