The growing stream of reporting on and data about fake news, misinformation, partisan content, and news literacy is hard to keep up with. This weekly roundup offers the highlights of what you might have missed.
“Passive misinformation” is a problem for The Hill and other mainstream media outlets. The liberal Media Matters did a study of how news organizations handle misleading claims and lies from Trump. “Passive misinformation is a problem for outlets across the board,” the group found after a review of “more than 54,000 tweets sent between 12 a.m. EST on January 26 and 12 a.m. EST on February 16 from the following Twitter feeds of U.S. wire services; major broadcast, cable, and radio networks; national newspapers; and Capitol Hill newspapers and digital outlets that cover Congress and the White House…The sample of more than 54,000 tweets was then narrowed down to a sample of about 2,000 tweets referencing comments Trump made.”
Media outlets put a great deal of focus on Trump’s comments — roughly one out of every five tweets mentioning Trump was about a particular quote. We found that that content strategy leaves outlets vulnerable to passing on the president’s misinformation, as 30% of those Trump quotes contained a false or misleading claim.
News outlets can report on Trump’s falsehoods without misleading their audience if they take the time to fact-check his statements within the body of their tweets. But we found that that isn’t happening consistently — in nearly two-thirds of tweets referencing false or misleading Trump claims, the media outlets did not dispute Trump’s misinformation.
All told, the Twitter feeds we studied promoted false or misleading Trump claims without disputing them in 407 tweets over a three-week period — an average of 19 undisputed false claims published each day.
The worst offender? The Hill.
The Twitter feed of The Hill, which has 3.25 million followers, was by far the worst offender we reviewed, producing more than 40 percent of the tweets that pushed Trump’s misinformation without context over the entire study. It promoted Trump’s falsehoods without disputing them 175 times — an average of more than eight per day. These numbers are so high in part because the outlet tweets about Trump far more frequently than other outlets, generating about a quarter of the total data. That high volume led to the outlet tweeting about false or misleading Trump claims 200 times. The feed rarely disputes the Trump claims it tweets about, instead simply passing along the misinformation 88 percent of the time. The Hill also frequently resends the same tweet at regular intervals, not only amplifying his falsehoods, but also making it more likely that the misinformation will stick with its audience through the power of repetition.
I went into this study questioning whether anecdotal reports of media amplifying Trump's misinformation with bad headlines and tweets might be overstated due to confirmation bias.

The data turned out to show much worse media performance than I expected. https://t.co/8OV2FGddbn pic.twitter.com/zAfQ04meuN

— Matthew Gertz (@MattGertz) May 3, 2019

“Social media and making extreme news have evolved in tandem over the last 10 or 20 years.” The Verge’s Jacob Kastrenakes took a closer look at the 12 research projects that are getting access to Facebook data. Here are a couple:
A project led by R. Kelly Garrett, an associate professor at Ohio State University, will look at whether there are predictable patterns that lead to sharing fake and dubious news stories. Facebook’s data, Garrett says, will provide things that traditional methods of data gathering can’t offer. “People can’t reliably tell you, ‘I usually share stuff I haven’t bothered reading in middle of the night, in spring, on weekends,’” he says. “People don’t know or have incentives not to tell you the truth.” Garrett hopes to identify patterns that track across social media networks, which could help online platforms make changes to discourage the sharing of fake stories.
Several research groups will also take advantage of the ability to study sharing behaviors on Facebook before and after an algorithm change designed to promote friends over media sources. “What’s tricky is that both social media and making extreme news have evolved in tandem over the last 10 or 20 years,” says Nicholas Beauchamp, an assistant professor at Northeastern University, who’s leading a research group that’s studying how peer sharing affects the polarization of news. His group will look across the algorithm change to see whether peer sharing changes the rates of fake news. “We have this nice little kind of natural experiment,” he says, “where suddenly there’s this unexpected shift towards much more peer sourced information.”
“A spiral of silence.” The Institute for the Future (IFTF) released a report and eight case studies about how particular social and issue-focused groups in the U.S. — Muslim Americans, Latinos, moderate Republicans, black women gun owners, environmental activists, pro-choice and pro-life campaigners, immigration activists, and Jewish Americans — were the targets of misinformation (on Twitter) during the 2018 midterms and likely will be again in 2020. The research was first written up by BuzzFeed. Samuel Woolley, the director of IFTF’s Digital Intelligence Lab, told Craig Silverman and Jane Lytvynenko: “We think that the general goal of this [activity] is to create a spiral of silence to prevent people from participating in politics online, or to prevent them from using these platforms to organize or communicate.”
Social & issue-focused groups are often the primary targets of computational propaganda: eight new case studies from the DigIntel Lab @iftf on the human consequences of #computationalpropaganda. https://t.co/acM7oLkn3z
— Samuel Woolley (@samuelwoolley) May 7, 2019
Spiral of silence theory is the idea, first proposed by the German political scientist Elisabeth Noelle-Neumann in 1974, that people who fear their positions are unpopular will choose not to voice them in order not to face social isolation. Her original conception focused primarily on unpopular opinions and their portrayal in mass media; with the rise of social media, it has also been applied to generally accepted beliefs that can prompt harassment from a small but aggressive group.
Here are the researchers’ main findings:
(1) Human social media users, not bots, produced the majority of harassment — but bots continue to be used to seed and promote coordinated disinformation narratives;
(2) Adversarial groups are co-opting images, videos, hashtags, and information previously used or generated by social and issue-focused groups — and then repurposing this content in order to camouflage disinformation and harassment campaigns;
(3) Disinformation campaigns utilize age-old stereotypes and conspiracies — often attempting to foment both intra-group polarization and external arguments with other groups; and
(4) Social media companies’ responses to curb targeted harassment and disinformation campaigns have not served to effectively protect the groups studied here from such content.
New research from @iftf shows that disinformation disproportionally targets latinos, muslims, and jews on social media with harassment, highjacked conversations, and flat out falsehoods spread by extremists: https://t.co/kFbFrx3Wpr
— Jane Lytvynenko 🤦🏽‍♀️🤦🏽‍♀️🤦🏽‍♀️ (@JaneLytv) May 7, 2019
This coincides with other research done in this field. According to a 2019 report from the ADL, 27% of black Americans, 30% of Latinos, 35% of Muslims, and 63% of the LGBTQ+ communities in the United States have been harassed online because of their identity.
— Jane Lytvynenko 🤦🏽‍♀️🤦🏽‍♀️🤦🏽‍♀️ (@JaneLytv) May 7, 2019
IFTF studies show that disinformation and harassment go hand-in-hand and cannot be separated.
Activists report their messages getting highjacked by their opponents. In the study on anti-immigration disinformation, highjacked narratives outpaced genuine ones in all cases but one pic.twitter.com/19yRdW1Cpp
— Jane Lytvynenko 🤦🏽‍♀️🤦🏽‍♀️🤦🏽‍♀️ (@JaneLytv) May 7, 2019