The growing stream of reporting on and data about fake news, misinformation, partisan content, and news literacy is hard to keep up with. This weekly roundup offers the highlights of what you might have missed.
What would Facebook’s turn to privacy mean for misinformation? This week Mark Zuckerberg published his “privacy-focused vision” for Facebook, writing, “I expect future versions of Messenger and WhatsApp to become the main ways people communicate on the Facebook network…I believe we should be working towards a world where people can speak privately and live freely knowing that their information will only be seen by who they want to see it and won’t all stick around forever.”
If such a shift really happens, what sort of impact would it have on misinformation on the platform?
Our lessons from WhatsApp also suggest that we may be more vulnerable to the effects of misinformation and disinformation in these more private, smaller, and closer social circles. And it will definitely be harder for us (as a collective) to identify/understand those dynamics. https://t.co/SQ7eyxnDUD
— Kate Starbird (@katestarbird) March 7, 2019
Fighting misinformation on WhatsApp (see, e.g., India and elsewhere) is arguably harder, not easier, because it is encrypted on a closed platform. Shift toward messaging would reduce profit incentives for fake news publishers, though. Not clear what net effect will be. https://t.co/AC0WlI1iHJ
— Brendan Nyhan (@BrendanNyhan) March 7, 2019
“It would push some of Facebook’s biggest PR problems under a rug, such as fake news, hate speech, election interference, and harassment, which would become much harder to police — or to hold Facebook accountable for,” Will Oremus argues at Slate. “And it would open new ones, creating ‘dark social’ networks that could be havens for criminal or even terrorist activity, while giving equal shelter to everyone from dissidents to hate groups.”
Wired’s Nicholas Thompson interviewed Mark Zuckerberg about some of this:
If Russian intelligence operatives had just used private encrypted messaging to manipulate Americans, would they have been caught? As Facebook knows from running WhatsApp, which is already end-to-end encrypted, policing abuses gets ever harder as messages get more hidden.
In our interview, Zuckerberg explained that this, not fears about the business model, is what keeps him up at night. “There is just a clear trade-off here when you’re building a messaging system between end-to-end encryption, which provides world-class privacy and the strongest security measures on the one hand, but removes some of the signal that you have to detect really terrible things some people try to do, whether it’s child exploitation or terrorism or extorting people.”
Techdirt’s Mike Masnick found a bright side, arguing it’s possible that “if Facebook were to move to more of a ‘protocols’ approach to messaging, rather than controlling everything, they might then be able to open up the system so that end users themselves could make use of third party apps or filters to help them decide if messages were legit or not, rather than leaving it entirely up to Facebook.” This seems naive: The people who proactively go out and, say, install a fake-news-identifying browser extension are not the problem. At any rate, Facebook already relies on third-party fact-checkers to help it police content, so I don’t see how this opens up new opportunities for outside fact-checkers.
Disappearing stories, encryption, private groups. Does not sound like good news for anyone wanting to track disinformation #Facebook
— emily bell (@emilybell) March 6, 2019
And here’s an interesting thread on how political discussion changes when it moves from social media platforms to closed messaging platforms.
tl/dr: political talk on MIMS is relevant and especially appealing for people who feel their views may not be seen positively if expressed in more public social media platforms, particularly in political cultures where talking in public about politics is less common.
— Cristian Vaccari (@25lettori) March 7, 2019
They also suggest that, to some degree, online political talk in private environments may attract types of people who are relatively less likely to discuss politics on social media, for individual or cultural reasons. Not necessarily apolitical, but less politically exuberant.
— Cristian Vaccari (@25lettori) March 7, 2019
Under pressure, Facebook will block anti-vax content. In a blog post Thursday, Facebook outlined how it will — after weeks of public pressure — curb misinformation related to vaccines:
— We will reduce the ranking of groups and Pages that spread misinformation about vaccinations in News Feed and Search. These groups and Pages will not be included in recommendations or in predictions when you type into Search.
— When we find ads that include misinformation about vaccinations, we will reject them. We also removed related targeting options, like “vaccine controversies.” For ad accounts that continue to violate our policies, we may take further action, such as disabling the ad account.
— We won’t show or recommend content that contains misinformation about vaccinations on Instagram Explore or hashtag pages.
— We are exploring ways to share educational information about vaccines when people come across misinformation on this topic.
Also, YouTube will be showing users fact-checks (which it’s calling “information panels”) on topics that are “prone to misinformation,” BuzzFeed’s Pranav Dixit reported, though the feature is only available to some users in India right now and YouTube hasn’t said when it will expand it globally. And it’s unclear who precisely the fact-checkers are and whether they are being paid.
“Newspaper clippings and television news screen grabs (real or fake) were extensively shared.” The general election that India will hold this year is being described as its first “WhatsApp election”: Since 2014, when the last general election was held, WhatsApp usage has skyrocketed in the world’s largest democracy. As of 2017, it had 200 million monthly active users in India, a figure that has certainly only grown since then (the company hasn’t released an updated number).
Fake news shared on WhatsApp has led to mob violence and murders in India. When the BBC did an in-depth analysis of a group of Indian WhatsApp users in 2018, it found that the majority of messages shared within their private networks could be categorized either as “scares and scams” or “national myths.” The most common way that information is shared, the researchers found, was via images — “visual information, sometimes layered with a minimum amount of text.”
This week, The Hindustan Times took a look at the messages shared in more than 2,000 public, politics-focused Indian WhatsApp groups during the 2018 state elections. Here’s reporter Samarth Bansal:
1. Newspaper clippings and television news screen grabs (real or fake) were extensively shared. Morphed ABP news screengrabs were the 2nd most shared misleading image in our data. (2/n) pic.twitter.com/CNmYRcopCi
— Samarth Bansal (@PySamarth) March 5, 2019
45% of over one million messages in our dataset were images (36%) and videos (9%). This analysis was restricted to the study of images. Here are methodological details.
Data collection is on, ideas/feedback welcome. (end)
Full story: https://t.co/2pDxIpx90H pic.twitter.com/v4jsfcVRBk
— Samarth Bansal (@PySamarth) March 5, 2019
Doctored screenshots and news clippings are used to make the content seem more reputable:
Seven of the ten most shared misleading images in the pro-BJP WhatsApp groups were media clippings. The most shared image was a screengrab of a primetime segment of Times Now, an English TV news channel, claiming that the Congress party manifesto in Telangana was Muslim-centric. Seven “Muslim only” schemes were included in the manifesto, the image claimed, including a scholarship for Muslim students and free electricity to Mosques. Except that the information was misleading. Alt News, a left-leaning fact-checking news website, later debunked how the news channel had misreported the story, by selectively picking parts of the manifesto to create a false narrative.
This message repeatedly appeared in various forms — eight of the top ten misleading images in the BJP groups were only about the manifesto — including screen grabs from CNBC-Awaaz, another news channel, and standalone graphics.
The example illustrates a key point: “fake news” as commonly understood has various shades. Unlike the morphed ABP news screenshots (second most shared) that propagated outright lies, the Telangana manifesto story is based on partially-true information that was later found to be misleading. The intent in the latter case is not clear and often difficult to establish.
Why are there so many media clippings? One possible explanation for this phenomenon is that WhatsApp-ers leverage mainstream media artefacts to compensate for the declining credibility of WhatsApp content.