The growing stream of reporting on and data about fake news, misinformation, partisan content, and news literacy is hard to keep up with. This weekly roundup offers the highlights of what you might have missed.
“It’s easier to be novel and surprising when you’re not bound by reality.” It’s not bots. It’s us. A paper published Thursday in Science (it’s the cover story) by MIT’s Soroush Vosoughi, Deb Roy, and Sinan Aral tracks the spread of fake and real news tweets and finds that fake news both reached more people and spread faster than the truth — BUT there are caveats about the “true” news here: It was mostly news that had been fact-checked by outlets like Snopes and PolitiFact, not some of the legit-crazy real stuff that’s been in the headlines of the nation’s largest papers recently.
The researchers looked at 126,000 “rumor cascades” spread by about 3 million people.
A rumor cascade begins on Twitter when a user makes an assertion about a topic in a tweet, which could include written text, photos, or links to articles online. Others then propagate the rumor by retweeting it. A rumor’s diffusion process can be characterized as having one or more cascades, which we define as instances of a rumor-spreading pattern that exhibit an unbroken retweet chain with a common, singular origin.
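To make that definition concrete, here is a minimal sketch of a cascade as a retweet tree, with the size and depth statistics the study tracks computed recursively. The Tweet class and its fields are invented for illustration, not the study’s actual data model.

```python
# A rumor cascade: one origin tweet plus an unbroken chain of retweets.
# Illustrative sketch only; the class and field names are invented.
from __future__ import annotations
from dataclasses import dataclass, field

@dataclass
class Tweet:
    tweet_id: str
    parent_id: str | None = None                      # None marks the origin
    children: list[Tweet] = field(default_factory=list)

def cascade_size(root: Tweet) -> int:
    """Total tweets in the cascade, counting the origin."""
    return 1 + sum(cascade_size(c) for c in root.children)

def cascade_depth(root: Tweet) -> int:
    """Longest retweet chain below the origin (origin alone = 0)."""
    if not root.children:
        return 0
    return 1 + max(cascade_depth(c) for c in root.children)

# Origin retweeted by two users; one of those retweets is retweeted again.
origin = Tweet("t0")
a, b, c = Tweet("t1", "t0"), Tweet("t2", "t0"), Tweet("t3", "t1")
origin.children, a.children = [a, b], [c]
print(cascade_size(origin), cascade_depth(origin))    # 4 2
```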
Note that not all of the “rumor cascades” were fake; they were a mix of true, false, and partially true stories: “We sampled all rumor cascades investigated by six independent fact-checking organizations (snopes.com, politifact.com, factcheck.org, truthorfiction.com, hoax-slayer.com, and urbanlegends.about.com) by parsing the title, body, and verdict (true, false, or mixed) of each rumor investigation reported on their websites and automatically collecting the cascades corresponding to those rumors on Twitter. The result was a sample of rumor cascades whose veracity had been agreed on by these organizations between 95 and 98 percent of the time.”
The true news was — again — primarily stuff that had been fact-checked; here, for instance, are the “Rating: True” stories on Snopes. Many of them are downright boring and not something most people would bother tweeting about. (“Was Ex-California State Senator and Gun Control Advocate Leland Yee Arrested for Gun Trafficking?” Yes.) To help generalize the results beyond fact-checked stories, however, the researchers had three college students analyze a second sample of more than 13,000 rumor cascades that hadn’t been verified by any fact-checking organization. “When we compared the diffusion dynamics of the true and false rumors that the annotators agreed on, we found results nearly identical to those estimated with our main data set,” the authors write.
The breathless coverage of the study itself (“False news 70 percent more likely to spread on Twitter,” “Fake news is 70% more likely to be shared on Twitter than true stories, ‘stunned’ MIT researchers find,” “It’s true: False news spreads faster and wider. And humans are to blame”) soon came under scrutiny:
Good thread on a **really** important study that is being widely misunderstood and misinterpreted. The study is great, fascinating, important and amazing. The interpretations have been wild, and some really misleading. https://t.co/hDNeKtdYls
— zeynep tufekci (@zeynep) March 9, 2018
In The Atlantic, Robinson Meyer elaborated:
Some political scientists also questioned the study’s definition of “news.” By turning to the fact-checking sites, the study blurs together a wide range of false information: outright lies, urban legends, hoaxes, spoofs, falsehoods, and “fake news.” It does not just look at fake news by itself — that is, articles or videos that look like news content, and which appear to have gone through a journalistic process, but which are actually made up.
Therefore, the study may undercount “non-contested news”: accurate news that is widely understood to be true. For many years, the most retweeted post in Twitter’s history celebrated Obama’s re-election as president. But as his victory was not a widely disputed fact, Snopes and other fact-checking sites never confirmed it.
The study also elides content and news. “All our audience research suggests a vast majority of users see news as clearly distinct from content more broadly,” [Rasmus Kleis] Nielsen, the Oxford professor, said in an email. “Saying that untrue content, including rumors, spread faster than true statements on Twitter is a bit different from saying false news and true news spread at different rates.”
But many researchers told me that simply understanding why false rumors travel so far, so fast, was as important as knowing that they do so in the first place.
Agree with the premise of the thread. Our analysis is of *fact-checked* stories. We have a robustness dataset of non-fact checked stories with the same results, but it is much smaller.
— Sinan Aral (@sinanaral) March 9, 2018
And so, with these don’t-freak-out caveats:
Falsehoods “diffused significantly farther, faster, deeper, and more broadly than the truth in all categories of information,” the researchers write. “Whereas the truth rarely diffused to more than 1000 people, the top 1% of false-news cascades routinely diffused to between 1000 and 100,000 people,” and many more people retweeted fake than true news. Fake political news “traveled deeper and more broadly, reached more people, and was more viral than any other category of false information. False political news also diffused deeper more quickly and reached more than 20,000 people nearly three times faster than all other types of false news reached 10,000 people.”
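Those depth, breadth, and virality measures all come from the shape of the retweet tree; the virality measure is structural virality, roughly the average distance between every pair of users in a cascade. Here is a minimal sketch with networkx on toy trees (not the study’s data), showing why a long chain of retweets-of-retweets scores higher than a star where everyone retweets the origin directly:

```python
# Structural virality: mean shortest-path distance between all pairs of
# nodes in the (undirected) retweet tree. Toy graphs, not the study's data.
import networkx as nx

def structural_virality(tree: nx.Graph) -> float:
    if tree.number_of_nodes() < 2:
        return 0.0  # undefined for a lone tweet; treat as zero here
    return nx.average_shortest_path_length(tree)

star = nx.star_graph(9)    # one origin retweeted directly by nine users
chain = nx.path_graph(10)  # a ten-step chain of retweets-of-retweets
print(round(structural_virality(star), 2))   # 1.8  (broad but shallow)
print(round(structural_virality(chain), 2))  # 3.67 (deep, more "viral")
```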
Why did the false rumor cascades spread faster? The team looked at a few hypotheses. Turns out A) it’s not bots and B) it’s not because the people who spread fake news have more followers. From a write-up of the paper by Science’s Katie Langin:
At first the researchers thought that bots might be responsible, so they used sophisticated bot-detection technology to remove social media shares generated by bots. But the results didn’t change: False news still spread at roughly the same rate and to the same number of people. By default, that meant that human beings were responsible for the virality of false news.
That got the scientists thinking about the people involved. It occurred to them that Twitter users who spread false news might have more followers. But that turned out to be a dead end: Those people had fewer followers, not more.
Finally the team decided to look more closely at the tweets themselves. As it turned out, tweets containing false information were more novel — they contained new information that a Twitter user hadn’t seen before — than those containing true information. And they elicited different emotional reactions, with people expressing greater surprise and disgust. That novelty and emotional charge seem to be what’s generating more retweets.
“It’s easier to be novel and surprising when you’re not bound by reality,” coauthor Roy told Scientific American’s Larry Greenemeier.
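The novelty measurement deserves a note: the researchers compared a rumor tweet’s topic distribution, derived from a topic model, against the tweets that user had recently been exposed to. The sketch below substitutes a much cruder stand-in, TF-IDF cosine distance against a user’s recent tweets; the function name and sample tweets are invented for illustration.

```python
# Crude novelty proxy: how dissimilar is a new tweet from what the user has
# seen lately? The study used topic-model distances; TF-IDF cosine distance
# here is a simpler stand-in. All names and sample tweets are invented.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

def novelty_score(new_tweet: str, recent_tweets: list[str]) -> float:
    """1 minus the max cosine similarity to anything seen recently."""
    vec = TfidfVectorizer().fit(recent_tweets + [new_tweet])
    new_vec = vec.transform([new_tweet])
    seen = vec.transform(recent_tweets)
    return 1.0 - float(cosine_similarity(new_vec, seen).max())

history = ["senate passes budget bill", "budget vote scheduled for today"]
print(novelty_score("senate passes budget bill", history))                 # ~0.0
print(novelty_score("aliens endorse budget bill, insiders say", history))  # higher novelty
```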
The paper’s attracting tons of praise in my people-who-study-fake-news Twitter list, and coauthor Aral, who is the David Austin Professor of Management at MIT, wrote it up for this week’s New York Times Sunday Review section. “Though it was disheartening to learn that humans are more responsible for the spread of false stories than previously thought,” he writes, “this finding also implies that behavioral interventions may succeed in stemming the tide of falsity.”
Jonathan Swift once wrote "Falsehood flies, and truth comes limping after it". An extremely cool study in @sciencemagazine supports Swift's claim: https://t.co/cQ9CDs5sde
Upshot: Fake news is not constrained by reality & is therefore more interesting and more likely to spread. pic.twitter.com/GveCokctuq
— Gordon Pennycook (@GordPennycook) March 8, 2018
Excellent, sound, interesting study. One correction: Twitter is NOT the "news" sphere, just like Facebook isn't "social media." Twitter is a relatively small component of much larger ecosystem. Every service has different network structure & uses/affordances. Extremely important. https://t.co/hCOppDQ9ot
— J0nathan A1bright (@d1gi) March 8, 2018
And while you’re reading Science… The issue also includes “The science of fake news,” an overview by a veritable Who’s Who of academics who study this stuff (deep breath: David Lazer, Matthew Baum, Yochai Benkler, Adam Berinsky, Kelly Greenhill, Filippo Menczer, Miriam Metzger, Brendan Nyhan, Gordon Pennycook, David Rothschild, Michael Schudson, Steven Sloman, Cass Sunstein, Emily Thorson, Duncan Watts, Jonathan Zittrain). The article summarizes existing fake news research (with links to papers), lists the questions that remain, and suggests possible interventions and solutions. One of their ideas (hint, hint):
We urge the platforms to collaborate with independent academics on evaluating the scope of the fake news issue and the design and effectiveness of interventions. There is little research focused on fake news and no comprehensive data-collection system to provide a dynamic understanding of how pervasive systems of fake news provision are evolving. It is impossible to recreate the Google of 2010. Google itself could not do so even if it had the underlying code, because the patterns emerge from a complex interaction among code, content, and users. However, it is possible to record what the Google of 2018 is doing. More generally, researchers need to conduct a rigorous, ongoing audit of how the major platforms filter information.
Along those lines, note that Twitter this week posted an opening for a new position, director of social science, whose duties will include acting as “liaison between the broader research community and Twitter” and helping to “identify, support, and develop partnerships with external researchers.”
This isn’t enough academics; I want more. The Exploring Media Ecosystems conference was held at MIT this week, and you might want to take a scroll through the #emeMIT hashtag. Some highlights (and, yes, this includes links to two lists of even more research into fake news):
Here’s @EthanZ’s minimal description of the problems we face in media ecosystems work #emeMIT pic.twitter.com/uMvsIW8aR0
— Media Cloud (@media_cloud) March 5, 2018
My main takeaway from #emeMIT so far: we really, really need to connect the East Coast critics with the West Coast engineers. Engineers aren't up on the research. Critics cannot suggest better algorithms. I go between these worlds. Talk to me.
— jonathanstray (@jonathanstray) March 5, 2018
Fine to have 32 conferences on misinformation, explains @cward1e, but it would be great if we could find some common issues, terminology, and a frame that works beyond Facebook and US politics. #emeMIT
— Ethan Zuckerman (@EthanZ) March 5, 2018
For those interested in #emeMIT, First Draft, @AnnenbergPenn and @knightfdn hosted a workshop in December for researchers and practitioners to discuss disinformation ecosystems. Dozens of short papers were produced by its attendees. Here's the collection: https://t.co/igasYZFu1e
— First Draft (@firstdraftnews) March 6, 2018
CW has shared this doc she's working on as an Ongoing Collection of Work by Journalists & Researchers on Disinformation https://t.co/nChSOLOcUp #emeMIT
— an xiao mina (@anxiaostudio) March 5, 2018