Nov. 6, 2017, 11:27 a.m.
Audience & Social

The scale of misinformation online is global. First Draft is pushing for more collaboration — and more research — as an antidote

“How can we have these conversations together, instead of just being in our camps throwing insults at each other that no one is doing enough?”

We live in a world where a man from North Carolina, inspired by a conspiracy theory, drove to a D.C. pizza shop with an assault-style rifle to investigate what he believed to be a child sex ring that ultimately linked back to Hillary Clinton.

It’s a world where hoaxes that lead to real-life tragedies spread at an exponential pace from person to person on messaging apps like WhatsApp, and where, by design, the platforms themselves can’t know the content of what’s being spread within these closed networks.

It’s a world where, since coming into office, the president of the United States has thrown out the term “fake news” hundreds of times to refer to an array of non-Fox News news organizations and reports he doesn’t like.

Current news coverage has been overwhelmingly focused on the intentionally-faked-news-articles aspect of the online news and information ecosystem. It’s been focused on hating on (not necessarily unfairly) the platforms that have “broken democracy.”

“I just would love to see a way of saying, this technology has already been built, it’s incredibly powerful, and with that power come really difficult conversations,” said Claire Wardle, who leads the research and practice group First Draft, which recently moved to Harvard University’s Shorenstein Center. “How can we, as a society, bring in people who have expertise in a number of areas — lawyers, ethicists, politicians, academics, technologists? How can we have these conversations together, instead of just being in our camps throwing insults at each other that no one is doing enough?”

How exactly can these organizations do more, and more importantly, what exactly are the problems that need to be addressed? In a report published last month with the Council of Europe, Wardle and her co-author Hossein Derakhshan recommend many concrete next steps, not just for technology companies, news organizations, and governments, but also for philanthropic foundations and educators. The report offers categorizations for the tangle of bad information online that are much more specific than “fake news,” and it is a useful reference for journalists and non-journalists alike.

I spoke to Wardle about her ambitions for First Draft (such as a U.S.-wide hub for misinformation monitoring leading up to the 2018 midterm elections), what information spaces we should be paying more attention to (WhatsApp! Augmented reality!), and the simplistic and damaging catch-all term “f— news” that makes her so mad, but that news organizations reporting on the space mostly won’t retire. Our conversation is below, edited for length and clarity.

Shan Wang: Now that you’ve moved, literally, I’m wondering what changes for First Draft. The larger mandate isn’t changing, but are you working differently? Will research, and broader types of research than you were doing before, be an even bigger component?

Claire Wardle: As you say, nothing is substantially changing. But we went from being quite a niche nonprofit to, all of a sudden, being in the spotlight for the issue du jour. Coming here allows us to think about scaling in a way that was just not possible for a small nonprofit based out of somebody’s living room. Having the support of an institution like Harvard, being able to work with researchers across the university, and being able to work with graduate students makes a huge difference. And then there’s having a global brand like Harvard, which helps people know about us in ways that were previously hard to achieve with a tiny staff and without those support mechanisms.

Wang: What new types of research are you adding to First Draft’s plate, now that you have a full university of academics to mine?

Wardle: From the very beginning, a big part of what First Draft has done is run real projects when we’re trying to test hypotheses. Something like CrossCheck was us trying something for 10 weeks, based on some hunches, and then testing it afterward.

First Draft’s core strength is being really connected to the news industry and to the platforms. We also work closely with academics and research institutes who do good work in the space but don’t necessarily have access to practitioners. We try to bridge that. A lot of what we did in 2017, we’ll just continue to try to do at scale. Hopefully we’ll be able to bring in more funding to do more of it. That’s the plan for 2018.

Wang: Part of an initial burst of funding from Knight announced last month mentioned a “learning lab,” where insights would come from researchers and graduate students testing ways to address misinformation. What is that lab working on, and will these be students and researchers across campus here?

Wardle: The U.S. midterms are going to be a big point for us in 2018. The test we’ll be doing there is to build out a hub for the five months before the election, where we will be monitoring disinformation and connecting with newsrooms around the country. It will draw on lessons from CrossCheck: what does this look like in the U.S. context?

We will be hiring students for the five months leading up to that. There will be a mix of students who are smart and interested in this work; some of them will be from here, some will be from elsewhere, and some will be recommended or have certain types of expertise.

That’s a big plan for 2018. The idea is that if we can get that model right, we can start scaling it globally in 2019. In 2017, we did a lot of these pop-up newsrooms around elections, which enabled us to test different technologies, tools, workflows, and techniques. But it isn’t sustainable to keep doing these pop-ups: training people, working with them for five to 10 weeks, stopping, and then moving on. The question for the next five years is: how do we build a more permanent approach to this type of monitoring and to working with newsrooms?

There’s a lot of duplication in this space. The benefit of having centralized monitoring is that you’re keeping newsrooms from that duplication. You’re also bringing accountability: different newsrooms become aware of the disinformation that’s circulating, and of what to report and when to report it.

We wrote a piece in September about how newsrooms should think about reporting on mis- and disinformation (“Can silence be the best response to mis- and dis-information?”). The idea is that if we can collaboratively verify, there is a way of cross-checking each other’s work that improves standards but also starts conversations about the best forms of reporting on this type of material.

Wang: And you had been doing some research on potential backfire effects of these verification efforts, like the CrossCheck pop-up in France for those elections over the summer. What’s come out of that research?

Wardle: The research, undertaken by a team of three French researchers, is going to be published November 16. It’s interviews with journalists who took part in the project, plus audience research. The responses have come back so overwhelmingly positive that we actually had to ask the researchers to look for more negative ones — surely somebody hated us? Of course there’s always self-selection in these kinds of research methodologies. But hopefully CrossCheck in France will continue in 2018, because there’s such enthusiasm for the project among the French journalists.

And then there was work by Lisa Fazio that was funded through the Knight Prototype Fund. She’s finishing up experiments now and will present early findings in Paris next week as well. She was looking at the visual icon and the way the debunks were framed, to understand whether that framing caused more harm than good. How did people read these debunks? Did the debunks do what we thought they would do? The combination of those sets of research means that we’ll have a pretty substantial body of evidence to make decisions about how we do these projects in the future — not just about how we set them up, but specifically about how we design the debunks, how we involve audiences, and how much of the process we make public. There are concerns I’ve spoken about in the past: we could just be saying, well, we’re throwing money at this, we’re doing these projects, they seem great, but we don’t know what impact they’re having on audiences.

Wang: What are some new research questions on the audience side of fact-checking and debunking that you’re looking into?

Wardle: We want to look specifically at the impacts of some of these debunking efforts on WhatsApp in India, Kenya, and South Africa. We’re seeking funding at the moment to do this kind of work. We’re really concerned about closed messaging apps, and particularly about non-political misinformation, such as rumors around health and science, and how that’s playing out in different parts of the world.

In the U.S., the conversation is very political and based around the Facebook News Feed. We just want to make sure we’re not missing what’s ultimately going to hit the U.S. very quickly.

Wardle: The other type of work we’re trying to do is understanding the reporting we’re doing at the moment about disinformation, bot networks, and hacking, and how we often don’t describe what we mean when we say “hacking.” We have concerns that we’re inadvertently giving oxygen to, or doing the work of, the agents of disinformation. These types of news stories get a lot of traffic. We know why they’re being written. But I worry about the longer-term impact of all the reporting we’re doing right now. We hope to do some research on that as well.

Wang: I’ve been making my way through your and Hossein Derakhshan’s substantial report and wondering about the concrete recommendations you make at the end. Do you have a plan to work with policymakers and lawmakers to get some of those adopted? I suspect being at Shorenstein might help with that, bumping into the actual policymakers.

Wardle: We’ve been pretty overwhelmed by the positive response to the report from journalists, policymakers, and academics. My hope is that because we now have this institutional home that allows us to connect with people who would’ve been much harder to connect with previously, we might start building up momentum around some of these recommendations.

I think for me, a question is: what can we do to help platforms think through some of these really knotty issues? My fear is that we’re becoming entrenched in certain attitudes, where we think we just need the platforms to admit they’re publishers, and then the platforms say the regulatory framework hasn’t caught up yet. We’re all just stuck.

How can we think through supporting the platforms on what we should all agree is something we didn’t see coming? Well, some of us saw this coming, but we didn’t see the complexities of these challenges.

When we think about these platforms, they’re not journalistic entities. The journalism community constantly gets frustrated at the platforms over how they think and talk about these issues. And these are really, really hard issues. Journalists have had to struggle, and still struggle every day, over what to publish and how to publish it and the impacts of what they publish. If you asked The New York Times to lay out specifically why any given story makes it to the front page on any given day, I doubt they could spell it out perfectly for every single story.

I just would love to see a way of saying, this technology has already been built, it’s incredibly powerful, and with that power come really difficult conversations. How can we as a society bring in people who have expertise in a number of areas — lawyers, ethicists, politicians, academics, technologists? How can we have these conversations together, instead of just being in our camps throwing insults at each other that no one is doing enough? I would love to see Shorenstein be a place where we can start having these conversations more thoughtfully.

First Draft does have a way of working with a number of different groups globally. If we’re going to move forward in this space, we need these kinds of conversations, and they need to be brokered and facilitated by an organization that doesn’t have any skin in the game.

Wang: Are there certain platforms that seem much more receptive than others to having these difficult conversations, or that feel as genuinely about the magnitude and seriousness of this issue as you and your team do? I guess this is hard to say about a platform company that has thousands of staff.

Wardle: These platforms are huge organizations, yes. We have good working relationships with people at all of the different platforms. Because of the conversations and the fact that they come to our meetings and convenings, we’re much more aware of some of the challenges that the platforms are facing.

The relationships we have with the people at these platforms mean that our work is better informed. We’re aware of their challenges. We’re able to have honest conversations about things we’d like to see happen. People at the platforms know some of these things already, but trying to make them happen is a challenge. We’ve seen the way the conversation has shifted in the past six months: there’s been a real awakening to what’s been happening on their platforms. I think 2018 will be the year the needle moves, such that there is a full recognition that “we have to do something.” I’m hopeful there will be space for conversations we wouldn’t have had a year ago.

Wang: In your report, you really highlight closed messaging apps as platforms we’re not fully grappling with. Are there other spaces we haven’t properly interrogated in this focus on Facebook — well, mostly Facebook, I think, plus some Google and Twitter?

Wardle: There’s a lot of emphasis on WhatsApp, which has huge reach. But there’s a whole range of different closed messaging apps, and we need to be looking at all of them.

Augmented reality is going to become a huge issue. So is the use of artificial intelligence, whether to automate manipulation technologies or to do some of the heavy lifting in terms of monitoring. There are also issues right now around what the platforms are potentially doing in terms of de-ranking content, where I’d like to see more transparency. We need, as many say, algorithmic accountability and transparency.

When I think about the challenges for the next two years, it’s going to be a mixture of new technologies and how manipulation and disinformation work on those platforms and through those technologies. The responses to those challenges are going to rely increasingly on automation. So how do we ensure we’re aware of what those responses are?

We need to test things. But I worry technology is moving at such a speed that, just as we respond to one thing and get our hands around it, other things are going to bite us on the bum. We don’t have a single institution big enough to fight this. There are a lot of really good people doing really great work, but it feels like many separate, disparate groups each trying to do their bit.

Wang: You really, really, really hate the term fake news. What other terms should we stop using in reporting on the mis- and disinformation problem?

Wardle: I am so mad about this! I just cannot believe that people think it’s OK to use this term! There’s some good research that the Reuters Institute put out about this. People using this term globally — when I say people, we know who I mean — are using it against the news industry, and it’s working. People are saying they think the mainstream media is peddling misinformation. The fact that journalists, academics, policymakers keep using it, argh! It’s a term that everybody understands. But there are other words we as a society do not use, because we know they’re damaging. It’s astonishing we use this term and think it’s fine when we just put it in bunny ears [quotation marks]. It’s not fine! It’s really not fine! I don’t know how to stop it!

I refuse to use it now. It will not come out of my mouth. It feels like a swear word. We have to be clear. Are we talking about disinformation? Are we talking about misinformation? Are we talking about pollution? Are we talking about propaganda?

Photo taken at a #Pizzagate conspiracy protest, by Blink O’fanaye. Used under a Creative Commons license.
