The big tech companies have long clung to the idea that their platforms are driven solely by the bias-free, objective realities of their algorithms. But of course the truth is more complicated: Behind those algorithms are people, and, when it comes to policing hate speech, the only thing consistent, it seems, is inconsistency.
Back in June, ProPublica published an in-depth investigation into the algorithms that Facebook uses to determine what it considers "hate speech" on its platform. The findings, which were part of ProPublica's coverage of what it calls "algorithmic injustice," were clear, if a little disheartening: Facebook's rules were being applied inconsistently, creating problems both for people who faced abuse without recourse and for people whose posts had been taken down with little explanation or room for appeal.
ProPublica, eager to learn more about these secret algorithms, is using its own technology to help. Earlier this week, the organization released its first Facebook Messenger bot, which is designed to collect stories about people's experiences with hate speech on Facebook. Using Messenger's conversational interface, the bot asks users questions such as "Did you experience hate speech?" and "Did Facebook delete the post?" Users can also share the exact wording of the comments in question, along with screenshots.
Julia Angwin, a senior reporter at ProPublica, explained that the bot is designed to help quantify the organization's reporting on how Facebook enforces its rules. All information readers submit is anonymized and entered into a structured database. This will help ProPublica determine how Facebook handled, say, all content that it considered a "call to violence," and what patterns, if any, there are in its responses.
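For readers curious about the plumbing, the interaction described above maps onto a fairly standard Messenger webhook pattern: Facebook forwards each incoming message to a server, the server replies with the next question through the Send API, and the answer is written to a database keyed to an anonymized user ID. The sketch below is purely illustrative and is not ProPublica's code; the question list, verify token, and SQLite schema are assumptions made for the example.

```python
# Illustrative sketch of a Messenger survey bot -- not ProPublica's implementation.
# Assumes a Flask server registered as a Messenger webhook, a made-up question
# list, and a local SQLite table for the anonymized answers.
import hashlib
import sqlite3

import requests
from flask import Flask, request

app = Flask(__name__)
PAGE_ACCESS_TOKEN = "YOUR_PAGE_ACCESS_TOKEN"  # assumption: issued to the page admin
VERIFY_TOKEN = "my-verify-token"              # assumption: chosen when registering the webhook

QUESTIONS = [                                 # assumption: simplified question flow
    "Did you experience hate speech on Facebook?",
    "Did Facebook delete the post?",
    "Please paste the exact wording of the post or comment.",
]

db = sqlite3.connect("responses.db", check_same_thread=False)
db.execute("CREATE TABLE IF NOT EXISTS answers (user_hash TEXT, question TEXT, answer TEXT)")
db.execute("CREATE TABLE IF NOT EXISTS progress (user_hash TEXT PRIMARY KEY, step INTEGER)")


@app.route("/webhook", methods=["GET"])
def verify():
    # Messenger's one-time subscription handshake: echo the challenge back.
    if request.args.get("hub.verify_token") == VERIFY_TOKEN:
        return request.args.get("hub.challenge", "")
    return "Verification failed", 403


@app.route("/webhook", methods=["POST"])
def receive():
    payload = request.get_json()
    for entry in payload.get("entry", []):
        for event in entry.get("messaging", []):
            if "message" not in event or "text" not in event["message"]:
                continue
            sender_id = event["sender"]["id"]
            # Anonymize the sender before storing anything, as the article describes.
            user_hash = hashlib.sha256(sender_id.encode()).hexdigest()
            row = db.execute("SELECT step FROM progress WHERE user_hash = ?",
                             (user_hash,)).fetchone()
            step = row[0] if row else 0
            if step > 0:
                # The incoming text answers the previously asked question.
                db.execute("INSERT INTO answers VALUES (?, ?, ?)",
                           (user_hash, QUESTIONS[step - 1], event["message"]["text"]))
            if step < len(QUESTIONS):
                send_text(sender_id, QUESTIONS[step])
                db.execute("INSERT OR REPLACE INTO progress VALUES (?, ?)",
                           (user_hash, step + 1))
            else:
                send_text(sender_id, "Thanks. A reporter may follow up with you.")
            db.commit()
    return "ok"


def send_text(recipient_id, text):
    # Reply through the Messenger Send API.
    requests.post(
        "https://graph.facebook.com/v2.6/me/messages",
        params={"access_token": PAGE_ACCESS_TOKEN},
        json={"recipient": {"id": recipient_id}, "message": {"text": text}},
    )


if __name__ == "__main__":
    app.run(port=5000)
```

Whatever the real implementation looks like, the key design point is the same one Angwin describes: every submission ends up as a structured row that can be queried by category later, rather than as free-floating messages in an inbox.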
“The goal here is to figure out how are these rules actually being implemented,” Angwin said. “What does it mean for people who have written a post that they think is fine, but that someone has flagged as hate speech? Does all of this fit within our conceptions of what hate speech is?”
The Messenger bot is primarily a way for people to share their stories with ProPublica's reporters, but it also doubles as "another form of storytelling," said Ariana Tobin, an engagement reporter at ProPublica. While there have been some high-profile stories about the people Facebook employs to moderate its platform (such as Adrian Chen's January piece in The New Yorker), many Facebook users don't understand the human element of Facebook's moderation decisions. "We see this as a way to get the reporting back out there and give it to another audience that can make use of it and react to it," Tobin said.
The use of Facebook, too, was important to the experiment. While news organizations looking to collect stories from readers en masse have long had the option of using tools such as SurveyMonkey or Google Forms, ProPublica chose Facebook because it was the best place to collect stories about people's experiences with the platform. "If you're looking to talk to a community, it's more powerful if you can go right where they already are," said Tobin. At the same time, the organization realized that some users interested in telling their stories might be uncomfortable using Facebook (or might have been banned from the platform outright), so its developers created a standard survey form as well.
The bot, the first of its kind for ProPublica, came with a handful of new challenges and questions for the organization. Early experimentation on how “chatty” the bot should be resulted in a product that is “friendly” but not too verbose. Another issue, which ProPublica hasn’t yet solved, is promotion. While the organization has promoted the bot on its own site, via email, and through Twitter, “the discoverability factor is still hard” when it comes to Facebook bots, said Tobin. Using Facebook Messenger to communicate directly with a news organization’s reporters is still a new idea for most Facebook users, which is why ProPublica has experimented with changing its Facebook page’s header image to both advertise the bot and illustrate exactly where users need to click on the page to share their stories.
ProPublica is considering more messaging-based projects this year, with plans to experiment with other chat apps as well as SMS. One of the big open questions is how much any of these experiments will entice readers, both in the initial phases and once the novelty factor of the conversational interface wears off.
And then there are the trolls. Giving users the opportunity to share their stories also opens the door for troublemakers to do the same. But so far, ProPublica hasn't seen much of that: Of the 100 or so responses the organization received on the first day (many of which Tobin said were "really good"), only a few came from people attempting to cause trouble. It's not clear how much that ratio will change as news of the project reaches more people.
Facebook, for its part, has taken steps to improve its transparency, at least at the margins. Last week, it said that it had hired Liz Spayd, former public editor of The New York Times, to help it become more open about how it makes decisions on hate speech, fake news, terrorism, and other "hard questions." (Facebook did not respond to a request for comment about the ProPublica bot in time for publication.)
But the company has largely been quiet on the issues that ProPublica has raised in its reporting, which is why Angwin says the work is so vital to raising awareness. “This is a very important issue, and Facebook doesn’t have an appeals process for users affected by it. Their decisions are opaque and not explained. We’re acting as that appeals process.”