The growing stream of reporting on and data about fake news, misinformation, partisan content, and news literacy is hard to keep up with. This weekly roundup offers the highlights of what you might have missed.
Instagram fact-checking hits some bumps. A Photoshopped image of some painted hills by a graphic designer was declared “false” by one of Instagram’s new fact-checking partners, spurring fears that artists’ work on the platform would be more broadly blocked. (The Next Web: “Instagram’s decision to hide photoshopped images is a disservice to art.”)
“We will treat this content the same way we treat all misinformation on Instagram,” a spokesperson told The Verge. “If third-party fact-checkers mark it as false, we will filter it from Instagram’s recommendation surfaces like Explore and hashtag pages.”
The company ended up backtracking, though: The fact-checking partner, Indian news site NewsMobile, reversed its fact-check (which was first published last year, under the headline “Here’s the truth behind these mesmerizing rainbow mountains”) and the image was set free.
It’s a tough line for Instagram to walk as it tries to filter out misinformation and bad-faith faked images while leaving art — you know, art-art, the good kind — alone. Instagram users also called out the platform this month for fact-checking a Warren Buffett meme while leaving alone politicians’ lies and political ads. (Trump’s approval rating with Republicans is nowhere near as high as he says it is; his overall approval rating is currently around 42 percent.)

@instagram will publish an independent fact check of this dumb meme but won’t fact check the fuckin president #okZuck pic.twitter.com/eJJD5cqRcd
— Upside down DQ Blizzard (@arisfromparis14) January 13, 2020
.@instagram thank you for leading the fight against misinformation with this hard hitting fact check. pic.twitter.com/kAg4A8uxsQ
— Logan Hall (@LoganHallNews) January 11, 2020
If you need a Instagram fact check to see that this is fake natural selection needs to speed its shit up pic.twitter.com/Iev95RsX3T
— Jack (@SovEnthusiast) January 10, 2020
I took a cool picture of some trees, but you gotta go to https://t.co/kSq3khZriY to see it. Instagram fact-checks jokes. #Trees #Outdoors #Leaves #Skaneateles #Landscape #CNY #CentralNY #Nature #Fog https://t.co/DjDj47qCDw pic.twitter.com/725uuncdYh
— TheSkepticalAtheist (@XepticalAtheist) January 14, 2020
The downside of slapping on that big publisher logo. New academic journal about misinformation? Yes, thank you. The Shorenstein Center at Harvard’s Kennedy School just launched the Misinformation Review, for which “content is produced and ‘fast-reviewed’ by misinformation scientists and scholars, released under open access, and geared towards emphasizing real-world implications.” There’s a bunch of good stuff in the first issue — here are some highlights:
— Emphasizing the publisher of an article doesn’t do much to make people better B.S. detectors. Various trusting-news initiatives have suggested that adding more context to news stories shared on social media can make people more likely to trust them. Seems logical, right? Facebook rolled out a feature that includes this information in 2018.

But it turns out that “increasing the visibility of publishers is an ineffective, and perhaps even counterproductive, way to address misinformation on social media,” Nicholas Dias, Gordon Pennycook, and David Rand find in a new study in which they showed participants real headlines from social media, in Facebook’s format. In some cases, publisher information was emphasized; in others, it was removed.
They found no effect:
We found that publisher information had no significant effects on whether participants perceived the headline as accurate, or expressed an intent to share it — regardless of whether the headline was true or false. In other words, seeing that a headline came from a misinformation website did not make it less believable, and seeing that a headline came from a mainstream website did not make it more believable.
In a follow-up survey, the researchers found that
providing publisher information only influenced headline accuracy ratings when headline plausibility and publisher trust were “mismatched” — for example, when a headline was plausible but came from a distrusted publisher (e.g., fake-news or hyperpartisan websites). In these cases of mismatch, identifying the publisher reduced accuracy ratings of plausible headlines from distrusted publishers, and increased accuracy ratings of implausible headlines from trusted publishers.
However, when we fact-checked the 30% of headlines from distrusted sources in our set that were rated as plausible by participants, we found they were mostly true. In other words, providing publisher information would have increased the chance that these true headlines would be mistakenly seen as false — raising the possibility of unintended negative consequences from emphasizing sources.
The lesson? “These observations underscore the importance of social media platforms and civil society organizations rigorously assessing the impacts of interventions (source-based and otherwise), rather than [implementing] them based on intuitive appeal.”
— The number of Americans who believe misinformation about vaccines is relatively high. People who get their information about vaccines from social media are more likely to believe misinformation than people who get their information from traditional media.
Dominik Andrzej Stecula, Ozan Kuru, and Kathleen Hall Jamieson surveyed a nationally representative sample of nearly 2,500 U.S. adults and found that a relatively high percentage of people are misinformed about vaccines:
18% of our respondents mistakenly state that it is very or somewhat accurate to say that vaccines cause autism, 15% mistakenly agree that it is very or somewhat accurate to say that vaccines are full of toxins, 20% wrongly report that it is very or somewhat accurate to say it makes no difference whether parents choose to delay or spread out vaccines instead of relying on the official CDC vaccine schedule, and 19% incorrectly hold that it is very or somewhat accurate to say that it is better to develop immunity by getting the disease than by vaccination.
The biggest indicator of whether or not someone believes the misinformation? Distrust of medical authorities.
Mistaken beliefs were also “remarkably consistent over a five-month period” in 2019, but the people who became more misinformed over that period “said that they were exposed to an increased amount of content about measles or the Measles, Mumps, and Rubella (MMR) vaccine on social media.”
Overall, U.S. confidence in vaccines has declined: Gallup said this week that 84 percent of U.S. adults believe vaccinating children is important, down from 94 percent in 2001, and “the only group that has maintained its 2001 level of support for vaccines is highly educated Americans, those with postgraduate degrees.”