A lot of research into fake news thus far has focused on the spread of content on social media platforms (namely Twitter) or has put people into research settings where they, for instance, read mock fake news articles. But what if you could actually look at how people interact with news as they do it, naturally?
“If you can attach fact checking to certain articles, how do you present that in a way that people don’t feel like somebody is preaching at them?” said Katharina Borchert, the chief innovation officer at Mozilla and, previously, the CEO of Spiegel Online.
Mozilla is strengthening its position in the fight against fake news with a new program — the Mozilla Information Trust Initiative — that focuses on building new products, teaching media literacy, doing original research, and running “creative interventions,” including providing funding to startups and holding events.
Part of that will mean opt-in studies with Firefox’s massive user base. I spoke with Borchert about how the initiative will work and what kinds of research she’d like to see.
Laura Hazard Owen: Misinformation and fake news are obviously problems that a lot of organizations are trying to tackle. How did you look at what other places were doing to decide the best ways for you to get involved?
Katharina Borchert: This is not the only way we’re involved. We’ve been partners from the very beginning in the
CUNY News Integrity Initiative alongside the Knight Foundation and Google and Facebook and others, and I think there are many great initiatives and projects out there already. We support
Misinfocon. But we have several initiatives ongoing internally as well. We were looking for more collaborators, and it felt like we needed to make this more of a formal thing, to be able to talk about it and highlight all of the different approaches.
We think we have a pretty unique opportunity in the research angle. We have a large user base with Firefox. We know that when we set up studies, people are willing to opt in and share their data with us to help us better understand certain issues.
Owen: The release mentions that Mozilla will be developing new products, as well as building on existing products like
Pocket,
Firefox Focus, and
Coral. Can you give me more specific details on what those products will be and how you’re thinking about misinformation in the context of the existing products?
Borchert: We’ve played around with several ideas internally. It doesn’t come as a big surprise that none of us really think there is a silver bullet or one easy technology solution to this. The problem is far too complex for that. I don’t have a great surprise product in the pipeline that will solve all of our problems once and for all.
But I think Coral, for example, plays an important role in building reader communities and allowing newsrooms to better surface quality user content and quality commentary. At Spiegel, the first instances of a massive influx of fake news that we saw — before we saw it in social media — was in comments on Spiegel Online. We became aware of Russia Today, for example, because it was massively pushed in our reader comments. While Coral and the commenting space are a bit adjacent, they’re important.
Pocket teaches us a lot about what kind of content and news sources are out there, what people are using and sharing. Instead of just looking at how you can make fake news go away, there’s the opposite approach: How do you alleviate the problem by surfacing quality content versus fake news, misinformation, and propaganda? Pocket can be a great tool for that. It can power much better content recommendations than we currently see. That is one of the more long-term projects that we have.
Last but not least, we hope that people make much more use, together with us, of opportunities like testing out quick ideas through browser add-ons or web extensions. You can build those really quickly, in a weekend hackathon for example, and test some ideas and assumptions. We’ve built a pretty cool product testing pipeline within Firefox called Test Pilot, so if there are interesting new browser features that we come up with internally or that other people have that might tackle this problem, we have the opportunity to test those out with hundreds of thousands of users in Test Pilot before they become fully fledged browser features.
This is a lot about inviting new ideas in and inviting outside perspective in. Mozilla is an open source company, so we really value the brain power out there. We don’t think that this is something that any of us can tackle by ourselves.
Owen: We’ve seen a number of
browser extensions that try to fight fake news, but they require user initiative and don’t seem like a broad solution. Are there other kinds of extensions or add-ons that you see possibly working better?
Borchert: I don’t think a browser extension is going to make fake news go away. I don’t think there’s a single solution that will significantly change the landscape around this. What I’m interested in right now: I think browser extensions can, for example, incorporate fact-checking that can help us with website and text classification to flag fake news better. They can help us because they allow users to contribute data about their surfing behavior to projects. It’s all just little building blocks, little bits and pieces that are parts of a larger puzzle we need to build. Right now we only see single slices of it.
Owen: What kind of projects might you be willing to fund? Is that coming out of a dedicated pool of money or will it be on an ongoing basis?
Borchert: It’s both by project and on an ongoing basis. We’re doing several different things. For example, we’re working with Misinfocon to run a number of hackathons with newsrooms. We’re supporting a number of tracks at big conferences. We want to fund a VR/AR kind of art challenge because I think art, especially immersive art forms, can be an incredibly powerful tool to make the effects of fake news tangible and to help people become aware of how large and complex this problem is.
We also have great interest in funding more research into that space. To me, research is an incredibly important foundation to better understand human behavior around fake news, to better understand the effects certain interventions can have. Down the road, the research will inform our product decisions.
Owen: Are there any questions that you’re especially interested in seeing researched, that you haven’t seen information about so far?
Borchert: This isn’t a Mozilla perspective so much as my own interest right now: I would like to see more research that explores how people react when they are confronted with fact-checking, with the correction of things that they read and beliefs that they hold.
I grew up in a very political culture. We used to fight heavily over opinions and the right solutions to social problems, or, to pick something more boring, the right ways to do tax reform. I never would have imagined living at a time where we can’t even agree on the basic facts and what reality is. I find that deeply troubling. I think studies still conflict as to
how much fact-checking actually changes people’s deeply held beliefs. That is one interesting aspect.
And how do you design product solutions that highlight fact-checking and flag fake news in a way that is acceptable to people — that doesn’t put them off or push them deeper into a certain belief system?
There is quite a bit of research around how interacting through technology, separated by screens, lowers our ability to have empathy with other people. I’m wondering how much that contributes to the increasing polarization we see in our society that is then amplified by fake news. I’m wondering if we can design product solutions that increase empathy and help us connect with others on a more human level, to overcome some of that polarization.
Last but not least, I would like to understand much, much better how big the impact of bot networks is. How much of the fake news distribution that we see is real people consuming fake news, and how much is bot networks gaming algorithms and social media rankings? I think that we have only started scratching the surface there.
Owen: I agree that exploring people’s individual reactions to fact-checking is fascinating.
Borchert: I’d be interested in the kind of qualitative research that you get through individual interviews and focus groups, but I’m also curious about doing some in-product A/B testing. If you can attach flags to articles or to searches, or if you can attach fact-checking to certain articles, how do you present that in a way that people don’t feel like somebody is preaching at them, or wagging a finger saying, hey, your perception of reality is wrong here? How do you make it more palatable and acceptable for people to emotionally engage with reality?
Mozilla is a nonprofit, and for us this fits into the whole framework of having the internet as one of the greatest public resources of our time. We invest a lot of time and resources into what we call the health of the open web. Part of that is policy-related and part of that is product-related. There are many threats to the health of the open web, from increased centralization to regulatory issues, but fake news and the whole misinformation problem is a big threat.
I think it’s also really important to not forget that this is not a U.S.-centric problem. This is not a Trump problem. This is a big global problem that affects all societies, cultures, and countries.