The growing stream of reporting on and data about fake news, misinformation, partisan content, and news literacy is hard to keep up with. This weekly roundup offers the highlights of what you might have missed.
Do people mainly share misinformation because they get distracted? A new working paper suggests that “most people do not want to spread misinformation, but are distracted from accuracy by other salient motives when choosing what to share.” And when the researchers — Gordon Pennycook, Ziv Epstein, Mohsen Mosleh, Antonio Arechar, Dean Eckles, and David Rand — DM’d Twitter users who’d shared news from unreliable websites, they found that “subtly inducing people to think about the concept of accuracy decreases their sharing of false and misleading news relative to accurate news.”
So why this disconnect between accuracy judgments and sharing intentions? Is it that we are in a "post-truth world" and people no longer *care* much about accuracy?
Probably not!
Those same Turkers overwhelmingly say that it's important to only share accurate information.
3/ pic.twitter.com/W1UA6VGSBd
— David G. Rand (@DG_Rand) November 17, 2019
We test these views by making the concept of accuracy top-of-mind. If people already recognize whether content is accurate but just don't care much, accuracy salience should have no effect. But if the problem is distraction, then accuracy salience should make people more discerning.
5/
— David G. Rand (@DG_Rand) November 17, 2019
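For readers new to this literature, "discernment" here is essentially the gap between how readily people share true versus false content. A minimal sketch of the idea (the function name and inputs are my own, not the paper's):

```python
# Hypothetical illustration of "discernment": the gap between the rate at
# which people share true content and the rate at which they share false
# content. Positive means sharing tracks accuracy; zero means it doesn't.
def discernment(share_rate_true: float, share_rate_false: float) -> float:
    return share_rate_true - share_rate_false

# E.g., sharing 40% of true headlines but 30% of false ones gives 0.10.
assert abs(discernment(0.40, 0.30) - 0.10) < 1e-9
```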
Finally, we test our intervention "in the wild" on Twitter. We build up a follower base of users who retweet Breitbart or Infowars. We then send each user a DM asking them to judge the accuracy of a nonpolitical headline (with DM date randomly assigned to allow causal inference).
7/ pic.twitter.com/xNYMJD9rB9
— David G. Rand (@DG_Rand) November 17, 2019
We find a significant increase in the quality of news posted after receiving the accuracy-salience DM: 1.4% increase in avg quality, 3.5% increase in summed quality, 2x increase in discernment. Users shift from DailyCaller/Breitbart to NYTimes!
9/ pic.twitter.com/52fAceFUPu
— David G. Rand (@DG_Rand) November 17, 2019
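The causal logic of that randomly assigned DM date is worth spelling out: because the message date is random, the users who have already been messaged on any given day are a random subset, so comparing their sharing against not-yet-messaged users estimates the treatment effect. Here is a rough sketch of that comparison, not the authors' actual pipeline; the column names and the notion of a per-link "quality" score (e.g., a trust rating for the linked domain) are assumptions:

```python
# A minimal sketch (not the authors' code) of the staggered-treatment
# comparison: tweets posted on or after a user's randomly assigned DM date
# count as "treated"; tweets from users not yet messaged serve as controls.
import pandas as pd

def estimate_dm_effect(tweets: pd.DataFrame) -> float:
    """tweets has columns: user, date, dm_date, quality
    (quality = a hypothetical trust score for the linked news domain)."""
    treated = tweets[tweets["date"] >= tweets["dm_date"]]
    not_yet = tweets[tweets["date"] < tweets["dm_date"]]
    # Difference in mean link quality, treated vs. not-yet-treated tweets.
    return treated["quality"].mean() - not_yet["quality"].mean()
```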
“Our accuracy message successfully induced Twitter users who regularly shared misinformation to increase the quality of the news they shared,” the authors write. They suggest the effect of the intervention may not last long, but that platforms could sustain it by delivering nudges regularly:
Because of the nature of our experimental design, we weren't really powered to test for long-term effects. My guess is that it probably didn't last that long – but it's a treatment that the platforms could deliver regularly (e.g. with pop-ups in the newsfeed) pic.twitter.com/mxFaPHeeRi
— David G. Rand (@DG_Rand) November 19, 2019
"many people are capable of detecting low-quality news content, but nonetheless share such content online because social media is not conducive to thinking analytically about truth and accuracy.”https://t.co/FkJkuU5yx8
— Craig Silverman (@CraigSilverman) November 19, 2019
What are conspiracy theorists like? Researchers tracked Reddit users over eight years to figure out how they ended up as active members of the r/conspiracy subreddit.
We undertook an exploratory analysis using a case-control study design, examining the language use and posting patterns of Reddit users who would go on to post in r/conspiracy (the r/conspiracy group). We analyzed where and what they posted in the period preceding their first post in r/conspiracy to understand how personal traits and social environment combine as potential risk factors for engaging with conspiracy beliefs. Our goal was to identify distinctive traits of the r/conspiracy group, and the social pathways through which they travel to get there. We compared the r/conspiracy group to matched controls who began by posting in the same subreddits at the same time, but who never posted in the r/conspiracy subreddit. We conducted three analyses.
First, we examined whether r/conspiracy users were different from other users in terms of what they said. Our hypothesis was that users who would eventually post in r/conspiracy would exhibit differences in language use compared to those who never did, suggesting differences in traits important for individual variation.
Second, we examined whether the same users differed from other users in terms of where they posted. We hypothesized that engagement with certain subreddits is associated with a higher risk of eventually posting in r/conspiracy, suggesting that social environments play a role in the risk of engagement with conspiracy beliefs.
Third, we examined language differences after accounting for the social norms of where they posted. We hypothesized that some differences in language use would remain after accounting for language use differences across groups of similar subreddits, suggesting that some differences are not only a reflection of the social environment but represent intrinsic differences in those users.
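To make the matching step concrete, here's a hedged sketch of the case-control idea the researchers describe: pair each eventual r/conspiracy poster with a user who started out in the same subreddit at the same time but never posted there. The column names and matching keys below are assumptions for illustration; the paper's actual pipeline is more involved:

```python
# A sketch of case-control matching under assumed column names: for each
# "case" (an eventual r/conspiracy poster), pick a control who made their
# first post in the same subreddit in the same month but never posted
# in r/conspiracy. Each control is used at most once.
import pandas as pd

def match_controls(users: pd.DataFrame) -> pd.DataFrame:
    """users has columns: user, first_subreddit, first_month, posted_conspiracy"""
    cases = users[users["posted_conspiracy"]]
    pool = users[~users["posted_conspiracy"]].copy()
    matches = []
    for _, case in cases.iterrows():
        candidates = pool[
            (pool["first_subreddit"] == case["first_subreddit"])
            & (pool["first_month"] == case["first_month"])
        ]
        if not candidates.empty:
            control = candidates.iloc[0]
            matches.append((case["user"], control["user"]))
            pool = pool.drop(control.name)  # remove the matched control
    return pd.DataFrame(matches, columns=["case", "control"])
```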
There were “significant differences” between the 15,370 r/conspiracy users and a 15,370-user control group; the paper walks through several of them.
If you’d like to spend more time with conspiracy theorists, CNN’s Rob Picheta took a trip to the third annual Flat Earth International Conference. Here’s Mark Sargent, the “godfather of the modern flat-Earth movement” and the subject of the 2018 Netflix documentary “Behind the Curve”:
“I don’t say this often, but look — there is a downside. There’s a side effect to flat Earth … once you get into it, you automatically revisit any of your old skepticism … I don’t think [flat Earthers and populists] are just linked. They kind of feed each other … it’s a slippery slope when you think that the government has been hiding these things. All of a sudden, you become one of those people that’s like, ‘Can you trust anything on mainstream media?'”
What if the most effective deepfake video is actually a real video? And to end on a downer, just like last week, here’s Craig Silverman:
Everyone thinks there will be a rather effective deepfake video, but I wonder if, in the next year, we will see something that is actually authentic being effectively dismissed as a deepfake, which then causes a mass loss of trust.
If there is an environment in which you can undermine not what is fake, and make it convincing, but undermine what is real — that is even more of a concern for me.