We live in a world where a conspiracy theory inspired a man from North Carolina to drive to a D.C. pizza shop with an assault-style rifle to investigate what he believed to be a child sex ring that ultimately linked back to Hillary Clinton.
It’s a world where hoaxes that lead to real-life tragedies spread at an exponential pace from person to person on messaging apps like WhatsApp, where by design the platforms themselves can’t see the content being spread within these closed networks.
It’s a world where, since coming into office, the president of the United States has thrown out the term “fake news” hundreds of times to refer to an array of non-Fox News news organizations and reports he doesn’t like.
Current news coverage has been overwhelmingly focused on the intentionally-faked-news-articles aspect of the online news and information ecosystem. It’s been focused on hating on (not necessarily unfairly) the platforms that have “broken democracy.”

“I just would love to see a way of saying, this technology has already been built, it’s incredibly powerful, and with that power come really difficult conversations,” said Claire Wardle, who leads the research and practice group First Draft, which recently moved to Harvard University’s Shorenstein Center. “How can we, as a society, bring in people who have expertise in a number of areas — lawyers, ethicists, politicians, academics, technologists? How can we have these conversations together, instead of just being in our camps throwing insults at each other that no one is doing enough?”
How exactly can these organizations do more, and, more importantly, what exactly are the problems that need to be addressed? In a report published last month with the Council of Europe, Wardle and her co-author Hossein Derakhshan recommend many concrete next steps not just for technology companies, news organizations, and governments, but also for philanthropic foundations and educators. The report offers categorizations for the tangle of bad information online that are much more specific than “fake news,” and it’s a useful reference for more than just journalists.
I spoke to Wardle about her ambitions for First Draft (such as a U.S.-wide hub for misinformation monitoring leading up to the 2018 midterm elections), what information spaces we should be paying more attention to (WhatsApp! Augmented reality!), and the simplistic and damaging catch-all term “f— news” that makes her so mad, but that news organizations reporting on the space mostly won’t retire. Our conversation is below, edited for length and clarity.
First Draft’s core strength is being really connected to the news industry and to the platforms. We also work closely with academics and research institutes who do good work in the space but don’t necessarily have access to practitioners. We try to bridge that. A lot of what we did in 2017, we’ll just continue to try to do at scale. Hopefully we’ll be able to bring in more funding to do more of it. That’s the plan for 2018.
We will be hiring students for the five months leading up to that. They’ll be a mix of students who are smart and interested in this work: some from here, some from elsewhere, some recommended to us or bringing particular types of expertise.
That’s a big plan for 2018. The idea is that if we can get that model right, we can start scaling it globally in 2019. In 2017, we did a lot of these pop-up newsrooms around elections, which enabled us to test different technologies, tools, workflows, techniques. But it isn’t sustainable to keep doing these pop-ups, to keep training people and working with them for five to 10 weeks, stopping, and then moving on. The question for the next five years, we think, is how to build a more permanent approach to this kind of monitoring and to working with newsrooms.
There’s a lot of duplication in this space. The benefit of having centralized monitoring is that you’re keeping newsrooms from that duplication. You’re also bringing accountability: different newsrooms become aware of the disinformation that’s circulating, and of what to report and when to report it.
We wrote a piece in September about how newsrooms should think about reporting on mis- and disinformation (“Can silence be the best response to mis- and dis-information?”). The idea is that if we can collaboratively verify, there is a way of cross-checking each other’s work that improves standards but also starts conversations about the best forms of reporting on this type of material.
And then there was work by Lisa Fazio that was funded through the Knight Prototype Fund. She’s finishing up experiments now and will present early findings in Paris next week as well. She was looking at the visual icon and at how the debunks were framed, to understand whether the framing caused more harm than good. How did people read these debunks? Did the debunks do what we thought they would do? The combination of those sets of research means we’ll have a pretty substantial body of evidence to make decisions about how we do these projects in the future — not just about how we set them up, but specifically about how we design the debunks, how we involve audiences, how much of the process we make public. There are concerns I’ve spoken about in the past: We could just be saying, well, we’re throwing money at this, we’re doing these projects, they seem great, but we don’t know what impact they’re having on audiences.
In the U.S., the conversation is very political and based around the Facebook News Feed. We just want to make sure we’re not missing what’s ultimately going to hit the U.S. very quickly.
The other type of work we’re trying to do is understanding the types of reporting we’re doing at the moment about disinformation, bot networks, and hacking, and how we don’t define what we mean when we say “hacking.” We have concerns that we’re inadvertently giving oxygen to, or doing the work of, the agents of disinformation. These types of news stories get a lot of traffic. We know why they’re being written. But I worry about the longer-term impact of all the reporting we’re doing right now. We hope to do some research on that as well.
I think, for me, a question is what we can do to help the platforms think through some of these really knotty issues. My fear is that we’re becoming entrenched in certain attitudes, where we’re thinking that we just need the platforms to admit they’re publishers. And then the platforms say the regulatory framework hasn’t caught up yet… We’re all just stuck.
How can we think through supporting the platforms on what we should all agree is something we didn’t see coming? Well, some of us saw this coming, but we didn’t see the full complexity of these challenges.
When we think about these platforms, they’re not journalistic entities. The journalism community constantly gets frustrated with the platforms over how they think and talk about these issues. And these are really, really hard issues. Journalists have had to struggle, and still struggle every day, over what to publish, how to publish it, and the impact of what they publish. If you asked The New York Times to lay out specifically why any given story makes it to the front page on any given day, I doubt they could spell it out perfectly for every single story.
I just would love to see a way of saying, this technology has already been built, it’s incredibly powerful, and with that power come really difficult conversations. How can we as a society bring in people who have expertise in a number of areas — lawyers, ethicists, politicians, academics, technologists? How can we have these conversations together, instead of just being in our camps throwing insults at each other that no one is doing enough? I would love to see Shorenstein be a place where we can start having these conversations more thoughtfully.
First Draft does have a way of working with a number of different groups globally. If we’re going to move forward in this space, we need these kinds of conversations, and they need to be brokered and facilitated by an organization that doesn’t have any skin in the game.
The relationships we have with the people at these platforms mean that our work is better informed. We’re aware of their challenges. We’re able to have honest conversations about things we’d like to see happen. People at the platforms know some of these things already, but trying to make them happen is a challenge. The way the conversation has shifted in the past six months shows there’s been a real awakening to what’s been happening on their platforms. I think 2018 will be the year the needle moves such that there is a full recognition that “we have to do something.” I’m hopeful there will be a space for conversations we wouldn’t have had a year ago.
Augmented reality is going to become a huge issue, as is the use of artificial intelligence to automate manipulation technologies or to do some of the heavy lifting in terms of monitoring. There are issues right now around what the platforms are potentially doing to de-rank content, and I’d like to see more transparency there. We need, as many say, algorithmic accountability and transparency.
When I think about the challenges for the next two years, it’s going to be a mixture of new technologies and how manipulation and disinformation work on those platforms and through those technologies. The responses to those challenges are going to rely increasingly on automation. So how do we ensure we’re aware of what those responses are?
We need to test things. But I worry technology is moving at such a speed that, just as we respond to one thing and get our hands around it, other things are going to bite us on the bum. We don’t have a single institution big enough to fight this. There are a lot of really good people doing really great work, but it feels like many separate, disparate groups each trying to do their bit.
I refuse to use it now. It will not come out of my mouth. It feels like a swear word. We have to be clear. Are we talking about disinformation? Are we talking about misinformation? Are we talking about pollution? Are we talking about propaganda?