Nov. 20, 2019, 8:50 a.m.

What should newsrooms do about deepfakes? These three things, for starters

Three researchers argue that the dangers of deepfakes are overblown, but that journalists will still need to think carefully about how they handle unconfirmed information.

Headlines from the likes of The New York Times (“Deepfakes Are Coming. We Can No Longer Believe What We See”), The Wall Street Journal (“Deepfake Videos Are Getting Real and That’s a Problem”), and The Washington Post (“Top AI researchers race to detect ‘deepfake’ videos: ‘We are outgunned’”) would have us believe that clever fakes may soon make it impossible to distinguish truth from falsehood. Deepfakes — pieces of AI-synthesized image and video content persuasively depicting things that never happened — are now a constant presence in conversations about the future of disinformation.

These concerns have been kicked into even higher gear by the swiftly approaching 2020 U.S. election. A video essay from The Atlantic admonishes us: “Ahead of 2020, Beware the Deepfake.” An article from The Institute for Policy Studies asks: “Will a ‘Deepfake’ Swing the 2020 Election?” The numerous offerings in this genre ask us to consider a future in which voters are subject to an unrelenting stream of fabricated video content indistinguishable from reality.

The articles above — and many others like them — express grave concerns about deepfakes, largely without hazarding solutions. The threat of deepfakes to sow disinformation is real — but it is broadly overstated, and it can be mitigated by interventions centered on newsroom practices.

While deepfakes might be novel in form, there’s good reason to be skeptical of their capacity to radically transform public discourse and opinion-forming. In part that’s because propagandists are pragmatists, and low-barrier-to-entry mediums like text and crude photoshops might serve their purposes just as well. Many of those who might be taken in by outright conspiracy theories are won over not by the strength of the evidence so much as by motivated reasoning: They’d like to believe them to be true. When threadbare theories have currency, it’s not clear that novel tools to make them appear objectively more solid would bring that many more people along.

The impact of deepfakes may be further blunted by rapidly improving detection capabilities, and by growing public awareness around the technology, courtesy of the sort of press coverage referenced above.

It’s important to recognize that the most recent round of reporting reflects decades of mounting consciousness and concern. A Washington Post article from 1999 titled “When Seeing and Hearing Isn’t Believing” warned of the persuasive power of audio-visual “morphing” technology:

Video and photo manipulation has already raised profound questions of authenticity for the journalistic world. With audio joining the mix, it is not only journalists but also privacy advocates and the conspiracy-minded who will no doubt ponder the worrisome mischief that lurks in the not too distant future.

In the years since, decontextualized or outright fabricated content has been a staple of social media newsfeeds, breeding a degree of native skepticism and distrust that will likely go some distance in heading off deepfake-driven sea changes in public opinion.

This is not to say that people won’t be harmed by deepfakes. We’re likely to see more malicious machine learning applications like DeepNude, an application that uses deepfake-style technology to synthesize pornographic images from pictures of clothed women (and only women). Those who create and spread such content should be held accountable for the harassment of their targets and for the broader environment that content creates — a problem distinct from the spread of disinformation. For the latter, an overly strong focus on the technology tends to lead us away from thinking about how and why people believe and spread false narratives, particularly in the political domain.

This skepticism has begun to find a very visible place in the rapidly emerging dialogue around deepfakes. In June, Joan Donovan and Britt Paris argued in Slate that low-tech “cheapfakes” can be just as damaging as deepfakes given an audience that is ready to believe, pointing to a widely circulated, primitively doctored video of Nancy Pelosi as their keystone example. In August, Claire Wardle — who leads First Draft, a nonprofit founded to fight disinformation — took to The New York Times with a video (part of which was itself deepfaked) making the case that the impacts of deepfakes are likely overhyped. It concludes with an entreaty to the public: “And if you don’t know — 100 percent, hand-on-heart — this is true, please don’t share, because it’s not worth the risk.”

Wardle’s admonition reflects an important effect that deepfakes might have on our media ecosystem, even if they’re not capable of distorting public opinion to a novel magnitude. As deepfakes become an everyday part of our information ecosystem, they might hinder the ability of newsrooms to respond quickly and effectively in countering false information.

Many newsrooms — particularly those that follow Wardle’s standard of 100 percent certainty — tend to operate under a precautionary principle. In cases where the veracity of a piece of information or content cannot be confirmed, their tendency is not to report on it at all until such confirmation materializes. As the Society of Professional Journalists’ Code of Ethics exhorts: “Verify information before releasing it…Remember that neither speed nor format excuses inaccuracy.” This already poses a challenge in dealing with text and images in a fast-moving, virality-driven news ecosystem, and technically sophisticated deepfakes may substantially raise the costs of doing conclusive forensics in the newsroom environment. Since calling a deepfake a deepfake will, in many cases, be construed as a highly political act, newsrooms will want to take the time to get their forensics right.

This laudable principle has some uncomfortable consequences when it collides with today’s media environment. In the critical first hours after their release, deepfakes may circulate through the web with comparatively little attenuation from established newsrooms and their fact-checkers. The sites that do swiftly weigh in on the veracity of a deepfake (or spread it without weighing in at all) will likely be those with less rigorous standards, promulgating half-formed assessments in the hopes of being among the first to report. In the process, they may wrest control of the narrative around the deepfake away from those best equipped to manage it responsibly.

And on the other side of the coin, this additional forensic burden will also slow the pace at which fastidious newsrooms report on content known to be definitely real or definitely fake. Allegations of various shades of deepfakery are already materializing as a tactic for sowing uncertainty around scandal-inducing evidence. One can imagine a publication releasing footage leaked to it by a White House staffer, only to have the White House accuse that publication of having passed on an excellent deepfake, kicking off a politically charged forensic back-and-forth unlikely to result in a widely accepted consensus. Such episodes will sap the credibility of reliable newsrooms, and may lessen whistleblowers’ willingness to come forward in the first place.

These dynamics will be important regardless of whether or not deepfakes are truly revolutionary in their ability to mislead otherwise canny audiences. By making it more expensive for newsrooms to do good forensics work at the breakneck pace of the news cycle — and opening the door for those less principled — deepfakes might slip through a loophole in journalistic ethics. Even if their persuasive power doesn’t far outstrip that of conventional formats for disinformation, the difficulty of quickly and conclusively debunking deepfakes (or verifying legitimate content) may bog down the traditional media institutions that many of us still appropriately rely on as counterweights against viralized disinformation.

And rely on them we do. Early visions of the digital public sphere held that the wisdom of the crowds would generally be sufficient to filter truth from fiction — that organic patterns of dissemination would naturally promote the verifiable and demote the unsubstantiated. Reality has defied this notion of a highly efficient marketplace of information and ideas, especially when that marketplace is populated with propagandists and bots hard to distinguish from everyone else. Newsrooms, fact-checkers, and the codes of journalistic ethics that ground them help us navigate this morass, leveraging their expertise and resources to provide the careful forensic work which consumers generally lack the time, skill, and inclination to undertake themselves. Jeopardizing their efficacy could be a painful blow to the health of our media ecosystem.

So, how do we navigate this dilemma, which pits the important values of journalistic ethics against the need to ensure that newsrooms can continue to fulfill their role as a critical societal counterweight to disinformation?

First, newsrooms must accelerate their ability to come to accurate conclusions about synthetic or tampered media. This will require that they expand their technical repertoire to confront emerging techniques of media manipulation. Stronger collaborations between the research community working on deepfake detection methodologies and the journalists confronting this content “in the wild” would be a good first step. By giving newsrooms access to the latest tools for sussing out the veracity of content, we might place them in a better position to keep pace with the flow of disinformation. An additional option might be to create a “batphone” system, staffed by expert researchers, to which newsrooms could submit suspicious content for verification or debunking in near real time.
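
To make the detection side concrete, here is a minimal sketch, in Python, of what a first-pass frame-level screen of a suspicious video might look like. It assumes a pretrained frame classifier is available as a TorchScript checkpoint; the “detector.pt” filename, the 224×224 input size, and the single-logit output are all illustrative assumptions, and real newsroom tooling would layer on face tracking, temporal models, ensembling, and human review.

```python
# A minimal, illustrative sketch of frame-level deepfake screening.
# Assumes a pretrained frame classifier exported as TorchScript
# ("detector.pt" is a hypothetical filename; the single-logit output
# is likewise an assumption made for illustration).
import cv2
import torch

def score_video(path: str, model_path: str = "detector.pt",
                sample_every: int = 30) -> float:
    """Return the mean 'synthetic' probability over sampled frames."""
    model = torch.jit.load(model_path).eval()
    cap = cv2.VideoCapture(path)
    scores, idx = [], 0
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        if idx % sample_every == 0:  # sample roughly one frame per second
            # OpenCV reads BGR; convert to RGB and resize to the
            # classifier's assumed 224x224 input.
            rgb = cv2.cvtColor(cv2.resize(frame, (224, 224)),
                               cv2.COLOR_BGR2RGB)
            x = torch.from_numpy(rgb).permute(2, 0, 1).float().div(255)
            with torch.no_grad():
                scores.append(torch.sigmoid(model(x.unsqueeze(0))).item())
        idx += 1
    cap.release()
    return sum(scores) / len(scores) if scores else float("nan")

if __name__ == "__main__":
    # A score near 1.0 suggests synthetic frames; near 0.0, authentic ones.
    print(f"mean synthetic score: {score_video('clip.mp4'):.3f}")
```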

Second, newsrooms might, in the immediate term, favor reporting on their process rather than waiting for a conclusive outcome. That is, they might provide ongoing public documentation of their own journey of investigation around verifying a piece of media — the methodologies they bring to bear and the ethical challenges around rendering a decision. Just as many outlets liveblog major political and sporting events, so too might they give running accounts of their deepfake investigations in real time, providing leverage on the breaking news cycle and transparency into forensic decision-making. This might be particularly necessary in the case of high-profile, elaborate fakes where maintaining a steadfast silence would allow less scrupulous voices to fill the room — but where reaching a conclusion may take hours or days (if not weeks or months) of careful work.

Third, contextual clues around media remain a critical part of assessing veracity and the agenda of those who might seek to spread disinformation. How did a piece of content first appear online? Did it appear as part of a coordinated campaign? When and where were the events depicted purported to have taken place? Is there other media corroborating the video or image in question? In many cases, social media platforms hold the keys to this data. Greater transparency and collaboration between platforms and newsrooms could play a major role in improving journalists’ ability to investigate and verify media.
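
As an illustration of what a first pass at such contextual checks might look like, here is a short Python sketch that pulls EXIF metadata from an image and fingerprints the file; the “suspect.jpg” filename is hypothetical, and since metadata is trivially stripped or forged, these signals are starting points for investigation rather than verdicts.

```python
# An illustrative sketch of basic contextual checks on an image file.
# Metadata is easily stripped or forged, so a missing or odd EXIF
# record is a cue to dig deeper, not proof of fabrication.
import hashlib
from PIL import ExifTags, Image

def inspect_image(path: str) -> dict:
    report = {}
    with Image.open(path) as img:
        # Map numeric EXIF tag ids to human-readable names.
        report["exif"] = {ExifTags.TAGS.get(tag_id, str(tag_id)): value
                          for tag_id, value in img.getexif().items()}
    # A file hash lets you check whether the identical file has
    # circulated before, e.g. against an archive of known fakes.
    with open(path, "rb") as f:
        report["sha256"] = hashlib.sha256(f.read()).hexdigest()
    # Capture timestamps and device info, when present, can be
    # cross-checked against the events the image purports to show.
    report["claimed_timestamp"] = report["exif"].get("DateTime")
    return report

if __name__ == "__main__":
    print(inspect_image("suspect.jpg"))  # "suspect.jpg" is hypothetical
```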

It seems unlikely that deepfakes will fundamentally — and ruinously — reconfigure the public’s relationship with evidence. But whether or not opinion columnists’ direst predictions come to fruition, it seems almost inevitable that newsrooms operating in a world of deepfakes will be forced to shoulder a heavy new burden. Newsrooms will need to develop new partnerships and procedures, lest the search for proof-positive authenticity become paralytic. The fundamentally multilateral nature of this process of innovation means that they won’t be able to do it alone. Propelling journalistic integrity and practice forward through the swampy terrain of deepfakes will require that researchers, platforms, and members of the public all pull an oar.

John Bowers is a researcher at the Berkman Klein Center for Internet and Society at Harvard University.

Tim Hwang is the former director of the Harvard-MIT Ethics and Governance of AI Initiative.

Jonathan Zittrain holds professorships at Harvard Law School, the Harvard Kennedy School of Government, and the Harvard School of Engineering and Applied Sciences. He is also faculty director of the Berkman Klein Center.

Image taken from a deepfake casting Vladimir Putin in “House of Cards” by DeepFakescovery.
