The growing stream of reporting on and data about fake news, misinformation, partisan content, and news literacy is hard to keep up with. This weekly roundup offers the highlights of what you might have missed.
Oh, Facebook. The big story right now is obviously Cambridge Analytica (we have roundups of the news here and here), but in an interview all about that with The New York Times this week, Facebook CEO Mark Zuckerberg also said a few things about fake news (or false news, as Facebook prefers to call it). First:
Take things like false news. You know, a lot of it is really spam, if you think about it. It’s the same people who might have been sending you Viagra emails in the ’90s, now they’re trying to come up with sensational content and push it into Facebook and other apps in order to get you to click on it and see ads. There are some pretty basic policy decisions we’ve made, like O.K., if you’re anywhere close to being a fake news site, you can’t put Facebook ads on your site, right? So then suddenly, it becomes harder for them to make money. If you make it hard enough for them to make money, they just kind of go and do something else.
Second:
One of the things that gives me confidence is that we’ve seen a number of elections at this point where this has gone a lot better. In the months after the 2016 election, there was the French election. The new A.I. tools we built after the 2016 elections found, I think, more than 30,000 fake accounts that we believe were linked to Russian sources who were trying to do the same kind of tactics they did in the U.S. in the 2016 election. We were able to disable them and prevent that from happening on a large scale in France.
In last year, in 2017 with the special election in Alabama, we deployed some new A.I. tools to identify fake accounts and false news, and we found a significant number of Macedonian accounts that were trying to spread false news, and were able to eliminate those. And that, actually, is something I haven’t talked about publicly before, so you’re the first people I’m telling about that.
Craig Silverman, Jane Lytvynenko, and Lam Thuy Vo wrote this week for BuzzFeed about sketchiness in Facebook Groups. Facebook is pushing Groups hard as a way of promoting “meaningful interactions,” and while it seems as if the Groups experience for most users is positive (I LOVE my local Facebook moms’ group; it’s the main reason I’m not quitting Facebook), Groups seem to be as subject to spammers, hackers, and trolls as everything else on Facebook is, and the problem will likely get worse as Facebook makes them a bigger part of its strategy.
“A binary signal of whether a source is trusted or not is absurd.” Campbell Brown, Facebook’s head of news partnerships, debated Richard Gingras, Google’s senior director of news and social products, at the Financial Times’ Future of News event Thursday in New York. I now have the actual transcript of their conversation (note: the transcript was provided to me by Facebook), so I’m just sticking that here:
Campbell Brown: I don’t think we can take such a hands off approach. I do think this is a fantastic place where the platforms can collaborate with publishers, with people like Emily [Bell], to try to identify signals that we can all have consensus on. There are some obvious ones — does a news organization have a corrections policy when they make a mistake? Who is the news organization, who works there, their bios, how long have they been around?
Matthew Garahan (FT): Some sort of accreditation system?
Brown: I think we are moving in that direction. I don’t know where we’ll land [Richard talking over].
Richard Gingras: From a First Amendment perspective we don’t want anyone accrediting who a journalist is, but to that point, for instance, one thing that we did is work with the industry on the Trust Project, which is how do organizations themselves do a better job of presenting who they are, what they’re about, their expertise, do they have those correction policies… [continues]
Brown: Having been a journalist for more than 20 years, I don’t think anyone can call themselves a journalist. I don’t think we should say, anyone who says I’m a journalist …
Gingras: I would be very reluctant to live in a society where, like, who decides who decides who a journalist is. But my point is, I want to raise a good example here. Trinity Mirror in the UK went and implemented all of the recommendations of the Trust Project…
[later]
Brown: That’s exactly what I’m talking about, there are signals, and the Trust Project is working on them. And that’s exactly what I’m talking about, is I do think this is an area where potentially we can collaborate and we should collaborate, as opposed to everybody working in their own little bubble to try to figure this out. This is a problem that’s going to impact us all, as I said before, given the direction where fake news is going, it’s going to impact publishers, it’s going to impact newsrooms, it’s going to affect platforms, and if we don’t work together, we’re going to have a lot less success in addressing it.
to clarify – i do think fake news may be pushing us into a world where we have to verify news orgs through quality signals or other means.
— Campbell Brown (@campbell_brown) March 22, 2018
Very interesting moment: Facebook says it will judge the quality of sources; Google says it does not. Except I believe they both will. And, no, I don't call that editorial and don't think that makes them into media companies. It makes them responsible companies. #FTFutureNews
— Jeff Jarvis (@jeffjarvis) March 22, 2018
NYT’s @deanbaquet says “it’s hard for me not to think of what Facebook does as editing.” It’s not how he edits at the Times, assigning stories, etc. “But they make decisions about what people get to see and I think that’s a form of editing.” #FTFutureNews
— Michael Calderone (@mlcalderone) March 22, 2018