Happy anniversary, Facebook: Snopes quit your fact-checking partnership.
Poynter’s Daniel Funke reported Friday that Snopes has pulled out of the third-party debunking squad Facebook enlisted in 2016. The Associated Press is not currently fact-checking for it either (but apparently hasn’t fully quit), TechCrunch reported.
Snopes, the 25-year-old fact-checking site, said Facebook’s system was too manual — not automated enough — for the 16-person organization. “With a manual system and a closed system — it’s impossible to keep on top of that stuff,” Snopes’ VP of operations Vinny Green told Poynter. “It doesn’t seem like we’re striving to make third-party fact checking more practical for publishers — it seems like we’re striving to make it easier for Facebook. At some point, we need to put our foot down and say, ‘No. You need to build an API.’”
(Snopes has its own leadership troubles, which it counts as another reason to focus more fully on its own fact-check work rather than the Facebook partnership.)
My colleague Laura Hazard Owen explained how the fact-checking dashboard operated in 2016:

Because the group of third-party fact-checkers is small at launch, and as part of its effort to focus on the highest-impact “worst of the worst,” Facebook is doing some sorting before the reported stories go to the fact-checkers. Its algorithm will look at whether a large number of people are reporting a particular article, whether or not the article is going viral, and whether the article has a high rate of shares. Facebook has also already had a system in place, for about a year, that uses signals around content (such as how people are responding to it in comments) to determine whether that content is a hoax.
The dream team in 2016 began with Snopes, PolitiFact, Factcheck.org, ABC, and the AP, and now has 34 members in countries around the world. But recent skirmishes between ThinkProgress and The (no longer) Weekly Standard highlighted some issues with the platform’s approach. An AP spokesperson told TechCrunch that they “fully expect to be doing fact check work for Facebook in 2019” but that the company is still in talks with Facebook about what that looks like. PolitiFact and AFP confirmed they are staying on, and a Facebook spokesperson said they’re confident in their approach and plan to expand the fact-checking partnership with more members and languages this year.

Last year, Facebook explained a little more about how the process works:
- We use technology to identify potentially false stories. For example, when people on Facebook submit feedback about a story being false or comment on an article expressing disbelief, these are signals that a story should be reviewed. In the US, we can also use machine learning based on past articles that fact-checkers have reviewed. And recently we gave fact-checkers the option to proactively identify stories to rate.
- Fact-checkers provide a rating and reference article. Independent third-party fact-checkers review the stories, rate their accuracy and write an article explaining the facts behind their rating.
- We demote links rated false and provide more context on Facebook. If a story is rated false, we reduce its distribution in News Feed. (See more on how News Feed ranking works.) We let people who try to share the story know there’s more reporting on the subject, and we notify people who shared it earlier. We also show the fact-checker’s reference article in Related Articles immediately below the story in News Feed.
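The three bullets above describe a flag → rate → demote pipeline. As a purely illustrative sketch of that flow — every function name, field, threshold, and demotion factor here is invented for the example, not drawn from Facebook’s actual system:

```python
# Toy sketch of the flag -> rate -> demote pipeline described above.
# All names, thresholds, and scores are hypothetical illustrations.

FALSE_RATING_DEMOTION = 0.2  # invented: multiplier applied to feed rank of stories rated false


def needs_review(story):
    """Step 1: user feedback signals (false reports, disbelief comments) flag a story."""
    return story["false_reports"] >= 10 or story["disbelief_comments"] >= 25


def apply_rating(story, rating, reference_url):
    """Step 2: a fact-checker attaches a rating and a reference article."""
    story["rating"] = rating
    story["reference_article"] = reference_url
    return story


def ranked_score(story, base_score):
    """Step 3: demote links rated false when ranking the feed."""
    if story.get("rating") == "false":
        return base_score * FALSE_RATING_DEMOTION
    return base_score


story = {"false_reports": 14, "disbelief_comments": 3}
if needs_review(story):
    apply_rating(story, "false", "https://example.org/fact-check")
print(ranked_score(story, 100.0))  # demoted from 100.0 to 20.0
```

The sketch also shows why Green’s complaint about a “manual” system bites: step 2 is the human bottleneck, while steps 1 and 3 are automated.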
The partnership was extremely costly for our organization, and money can't fix all the concerns we brought up. There was not an amount of $ that Facebook could have offered that fixed some of the concerns we brought up. The terms were not suitable, we needed to announce it.
— Vinny @ Snopes (@vinnysgreen) February 1, 2019
Some of the fact-checking partners are happy with how the program has turned out, though it hit bumps rolling out just after the 2016 election. (The fact-checkers are paid — Snopes received $100,000 from Facebook in 2017 for the work; France’s Libération got $100,000 in 2017 and $245,000 in 2018.) But you’d think that if Facebook really wanted to make a dent, it would attack misinformation at the publisher level, not post by post (misinformation travels faster than fact-checks, after all). Laura raised that point in 2016:
The key issue and possible pain point, which isn’t addressed in the changes Facebook outlined Thursday, is that reporting happens on a per-post level, rather than on the publisher level. Since Facebook is focusing specifically on “clear hoaxes spread by spammers” here, it seems as if it would be more efficient to simply block known hoax news sites like The Denver Guardian. But that seems to be more of a blanket approach than Facebook is willing to take at this point, and it would likely open the company up to a great deal of backlash.
At any rate, opinions are divided about how effective the program has actually been.
In my opinion, the fact checking partnerships were always PR, because it’s the kind of well-understood, visible intervention that journalists can see and cover.
The really effective product changes are often invisible and only seem tangentially related (fake account detection).
— Alex Stamos (@alexstamos) February 1, 2019
Yes, the partnership was launched as part of a PR strategy. I heard a senior FB official flippantly say as much in private. And that's why it began in December 2016 (one month after the US Presidential elections) and not in April 2016 (one month before the Filipino ones).
— Alexios (@Mantzarlis) February 2, 2019