The growing stream of reporting on and data about fake news, misinformation, partisan content, and news literacy is hard to keep up with. This weekly roundup offers the highlights of what you might have missed.
New research published in PNAS by Duke’s Chris Bail and others suggests that “disrupting selective exposure to partisan information among Twitter users” can backfire: conservatives who are exposed to liberal views actually become more entrenched in their previous beliefs, while liberals exposed to conservative viewpoints don’t double down nearly as much.
The researchers came up with a group of liberal and conservative U.S. Twitter users, then:
2/5 We surveyed them about their views on social policy issues and then offered half of them financial compensation to follow bots we built that exposed them to opposing political views. pic.twitter.com/cYZG42cZuM
— Chris Bail (@chris_bail) August 28, 2018
And here’s what happened:
3/5 Instead of reducing political polarization, our intervention increased it. Republicans expressed substantially more conservative views post treatment, and liberals expressed slight increases in liberalism (though these were not statistically significant). pic.twitter.com/9tJsme5xx3
— Chris Bail (@chris_bail) August 28, 2018
The researchers also surveyed the users regularly to make sure that they were actually seeing the bots’ messages. From the paper:
Although treated Democrats exhibited slightly more liberal attitudes posttreatment that increase in size with level of compliance, none of these effects were statistically significant. Treated Republicans, by contrast, exhibited substantially more conservative views posttreatment. These effects also increase with level of compliance, but they are highly significant. Our most cautious estimate is that treated Republicans increased 0.12 points on a seven-point scale, although our model that estimates the effect of treatment upon fully compliant respondents indicates this effect is substantially larger (0.60 points). These estimates correspond to an increase in conservatism between 0.11 and 0.59 standard deviations.
There are caveats: most people in the U.S. aren’t on Twitter; this was a bot, not a person; people who identify as independents weren’t surveyed; the financial incentives may well have skewed the results in some way; and (as always!) this is Twitter, not…all of real life. Still, the study reveals “significant partisan differences in backfire effects,” and, as the authors put it, “we found no evidence that exposing Twitter users to opposing views reduces political polarization.”
This doesn’t mean that filter bubbles aren’t a problem.
5/ It's actually _because_ of filter bubbles that most online content is so uniquely bad at persuading outgroup members. When you're in a filter bubble you lose the ability to understand the other side, and you don't have much incentive to reach out to them either.
— Chris Said (@Chris_Said) August 28, 2018
The findings do suggest, however, that (once again) changing people’s minds is really hard, and that conservatives’ minds may be particularly difficult to change. Keep an eye out for more research from the new Duke Polarization Lab. And, by the way, it looks as if Twitter is now suggesting accounts to unfollow.
Moving through a “space of hate.” What do we do with the “active haters” on Twitter, the really bad racists and misogynists, the ones who use the most awful words? This week at Northeastern’s Preconference on Politics and Computational Social Science, Northeastern professor Nick Beauchamp shared some recent research on “the light and dark side of online bubbles” and on how some of the most racist, misogynistic Twitter users “move through the space of hate throughout their careers.” (“There’s a bunch of Dutch people in our dataset,” Beauchamp noted, “reflecting a recent surge in engagement in U.S. politics and hate speech by Dutch speakers.”) The research is from a forthcoming paper, “Trajectories of hate: mapping individual racism and misogyny on Twitter.”
Beauchamp and fellow authors Sarah Shugars and Ioana Panaitiu assembled a set of 1,000 “active haters,” Twitter users who both follow many members of the right-wing elite and use a lot of hateful language (based on the word list from Hatebase). What they wanted to know, in Beauchamp’s words: Does the “consumption benefit of racism shift…after the football season kicks in, or something like that?” It turns out, it kind of does: They saw a “clockwise aggregate flow” for tweets containing racist and misogynistic language.
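To make that selection criterion concrete, here’s a minimal sketch of how one might flag users who both follow many right-wing elite accounts and regularly tweet terms from a Hatebase-style lexicon. This is not the authors’ actual pipeline; the account handles, terms, and thresholds below are placeholders.

```python
# Illustrative only: combine a follow-based criterion with a lexicon-based one.
# ELITE_ACCOUNTS, HATE_TERMS, and both thresholds are placeholders, not values
# taken from the paper.
import re
from typing import Dict, List, Set

ELITE_ACCOUNTS: Set[str] = {"elite_account_1", "elite_account_2"}  # placeholder handles
HATE_TERMS: Set[str] = {"term_a", "term_b"}  # stand-in for a Hatebase-style word list


def hate_rate(texts: List[str], lexicon: Set[str]) -> float:
    """Fraction of a user's tweets that contain at least one lexicon term."""
    if not texts:
        return 0.0
    hits = sum(1 for t in texts if set(re.findall(r"[a-z']+", t.lower())) & lexicon)
    return hits / len(texts)


def find_active_haters(
    follows: Dict[str, Set[str]],   # user -> accounts they follow
    tweets: Dict[str, List[str]],   # user -> their tweet texts
    min_elite_follows: int = 10,    # placeholder threshold
    min_hate_rate: float = 0.05,    # placeholder threshold
) -> List[str]:
    """Users who follow many elite accounts AND frequently use lexicon terms."""
    return [
        user
        for user, followed in follows.items()
        if len(followed & ELITE_ACCOUNTS) >= min_elite_follows
        and hate_rate(tweets.get(user, []), HATE_TERMS) >= min_hate_rate
    ]
```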
I asked Beauchamp to explain what that “clockwise aggregate flow” means, and here’s what he told me:
The most virulent haters (both racists and misogynists) do seem to have an overall flow where the worst hate does eventually diminish, but what they’re doing instead — other topics, or just the same topics with less hate — is just speculative at this point. Our original theory was more about how they get there than about how they come back, and insofar as we had a theory of return it was more about general interests eventually shifting or regression to the mean than anything more specific — though it will definitely be worth looking more closely at how they get better, for those who do. The last figure in the paper does show a general flow from racism to misogyny, which may suggest where some of those racists went — though not a very optimistic outcome!
The researchers also find that racist and misogynistic speech are deeply connected. From the paper:
Most notably, hate speech of various forms are densely interconnected, with misogyny in particular intertwined with almost all other forms of hate. While racist speech is largely about or directed at black individuals or black people in general, misogynist speech appears both frequently in conjunction with specific (often Democratic in this corpus) women, as well as via terms of opprobrium for other men, including amidst Islamophobic and white-on-white attacks.
While this finding would be no surprise to scholars who emphasize intersectionality or historians of white supremacist movements, it is difficult to situate within the public opinion literature in American politics, which has typically treated attitudes about race and attitudes about gender as two separate entities.
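One rough way to see that kind of interconnection in a tweet corpus, purely as an illustration and not the paper’s method, is to tag each tweet with the categories of hateful terms it contains and count how often categories show up together:

```python
# Illustrative only: count co-occurrence of hate-speech categories within tweets.
# The category lexicons are placeholders, not the terms used in the paper.
import re
from collections import Counter
from itertools import combinations
from typing import Dict, List, Set

CATEGORY_LEXICONS: Dict[str, Set[str]] = {
    "racist": {"term_a", "term_b"},      # placeholder terms
    "misogynist": {"term_c", "term_d"},  # placeholder terms
    "islamophobic": {"term_e"},          # placeholder terms
}


def category_cooccurrence(tweets: List[str]) -> Counter:
    """For each tweet, find which categories it touches, then count category pairs."""
    pair_counts: Counter = Counter()
    for text in tweets:
        tokens = set(re.findall(r"[a-z']+", text.lower()))
        present = sorted(cat for cat, lex in CATEGORY_LEXICONS.items() if tokens & lex)
        pair_counts.update(combinations(present, 2))
    return pair_counts
```

In a corpus like the one described above, a large count on the (“misogynist”, “racist”) pair would be the simplest version of the overlap the authors describe; their analysis maps how individual users move among these forms of hate over time rather than just counting co-occurrences.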