The growing stream of reporting on and data about fake news, misinformation, partisan content, and news literacy is hard to keep up with. This roundup offers the highlights of what you might have missed.
Parler will be nicer, but only on iOS. In March Apple blocked Parler, the “free speech” app that has become a haven for far-right extremists and conspiracy theorists, from the App Store after initially removing it in January following the Capitol riots. A letter that Apple sent to Parler at the time “included several screenshots to support the rejection,” Bloomberg reported, including “user profile pictures with swastikas and other white nationalist imagery, and user names and posts that are misogynistic, homophobic and racist.”
As of Monday, Parler’s app is back in the App Store, but what you’ll see on its iOS app is different from what you’ll see on its website or other smartphones (Parler is still banned from Google Play, but can be side-loaded onto Android phones from Parler’s site). From The Washington Post’s Kevin Randall:
Posts that are labeled “hate” by Parler’s new artificial intelligence moderation system won’t be visible on iPhones or iPads. There’s a different standard for people who look at Parler on other smartphones or on the Web: They will be able to see posts marked as “hate,” which includes racial slurs, by clicking through to see them.
Parler has resisted placing limits on what appears on its social network, and its leaders have equated blocking hate speech to totalitarian censorship, according to Amy Peikoff, chief policy officer. But Peikoff, who leads Parler’s content moderation, says she recognizes the importance of the Apple relationship to Parler’s future and seeks to find common ground between them. […]
Parler is still pressing Apple to allow a function where users can see a warning label for hate speech, then click through to see it on iPhones. But the banning of hate speech was a condition for reinstatement on the App Store.
Parler will block hate speech on iphones, but not other phones or the web.
Increasingly thinking this is the future of the internet: we'll all be looking at different things in the same place. More echo chambers, not less. https://t.co/p7upN7Em5b
— evelyn douek (@evelyndouek) May 17, 2021
Really interesting type of granularity emerging here: automated moderation at an infrastructure level. There have been localized speech limitations (ie Twitter’s “Nazi bit” for certain regions) but I think this is the first to differentiate like this? https://t.co/UImFS9ifhQ
— Renee DiResta (@noUpside) May 17, 2021
Also, Parler is getting trending topics.
The nypost again “fake news”. They have a story on peacocks today and say I have sixteen on my farm I actually have 21 of these glorious birds whose house is impeccable. They do not smell. They are so clean! Their voices are loud but such fun to hear. They are so friendly
— Martha Stewart (@MarthaStewart) May 16, 2021
Christian Staal Bruun Overgaard and Natalie (Talia) Jomini Stroud surveyed 1,010 U.S. adults in August 2020. They found that “Americans’ perceptions of hot-button issues are largely driven by partisanship.”
Respondents were presented with four statements and asked to rate each one as “definitely true,” “probably true,” “unsure,” “probably false,” or “definitely false.” Here are the statements and correct answers:
— Russia tried to interfere in the 2016 presidential election. (True)
— Since February 2020, the flu has resulted in more deaths than the coronavirus. (False)
— Trump failed to send U.S. health experts to China to investigate coronavirus. (False)
— It is illegal to mail ballots to every registered voter. (True)
Age and education levels were correlated with giving correct answers to some of the questions — older people and people with at least a bachelor’s degree were more likely to correctly assess the statements about Russia’s interference in the 2016 election and flu deaths. But “partisanship turned out to be the strongest predictor of Americans’ knowledge, even surpassing education.” Democrats were more likely to rate statements favoring the Democratic Party as true; Republicans were more likely to rate statements favoring the Republican Party as true.
When evaluating the statement — mostly congenial to Democrats — regarding Russia’s interference with the 2016 U.S. presidential election, almost nine in ten (87.4%) Democrats correctly said it was “probably true” or “definitely true,” whereas fewer than half (48.5%) of Republicans said so.
For the two false statements, partisans’ responses were closely related to their political preferences. For the statement claiming that the flu had resulted in more deaths since February than the coronavirus, close to seven in ten (65.8%) Democrats correctly labeled it as “probably false” or “definitely false,” whereas fewer than four in ten (34.6%) Republicans did so.
Conversely, for the statement asserting that Trump had failed to send U.S. health experts to China to investigate the coronavirus, almost half (49.5%) of the Republicans correctly labeled the statement as “probably false” or “definitely false,” whereas fewer than one in ten (6.9%) Democrats gave these responses.
When evaluating a true statement — congenial to Republicans — which correctly said that it is illegal to mail ballots to every registered voter in the U.S., fewer than one in ten (7.7%) Democrats answered “probably true” or “definitely true,” whereas just over a quarter (25.8%) of Republicans gave these answers.
The study is here.
How news organizations fought misinformation during the pandemic. As part of a larger American Press Institute report called “How local news organizations are taking steps to recover from a year of trauma,” Jane Elizabeth takes a look at news orgs’ efforts to fight misinformation during the pandemic — locally:
Mahoning Matters in Ohio wanted to debunk a viral conspiracy about antifa groups looting the local Wal-Mart, so they actually went to the Wal-Mart and showed on Facebook Live that there was no antifa, no looting. “Instead of just reporting about this as a misinformation trend, we went out there and dispelled the rumors,” says former publisher Mandy Jenkins. “We can do that with every story. We’re local.”
Back in March 2020, when there was only one confirmed coronavirus case in Arizona, The Tucson Sentinel decided to jump proactively into a potential pit of conspiracies and lies: Facebook. “It’s important to challenge [misinformation] right where it happens,” says Dylan Smith, the Sentinel’s editor and publisher, so the Tucson Coronavirus Updates Facebook group was launched …
The Sentinel team set up guidelines and rules for participating in its Facebook group, and designated administrators and monitors — comprised of volunteers from the community as well as Sentinel staff — to keep the conversations in check. “Too many newsrooms try to fix social media disasters after the train’s already run off the trestle and exploded on the rocks below,” says Smith. “That never works.”
Importantly, the Sentinel set a limit on participation in the Facebook group: Users must be local residents. “By restricting membership to those people who actually live in the Tucson area, we’ve eliminated a lot of drive-by trolls, and while we haven’t had to ban too many people or even mute them, we don’t hesitate if there’s someone who’s not there to participate in good faith,” says Smith.
[In] West Virginia, Black by God, a local startup for Black residents, recognized that the lack of trustworthy information in the community left it wide open for misinformation — an issue examined in a project supported by the Lenfest Institute and a study published by the Harvard Kennedy School in January. Journalist Crystal Good of Charleston, W.Va., launched the Black by God Substack newsletter and a website in part to help improve “political literacy” and the lack of access to COVID-19 data in diverse communities.