
Is seeing still believing? The deepfake challenge to truth in politics

U.S. House Speaker Nancy Pelosi (D-CA) speaks during a news conference on the USMCA trade agreement on Capitol Hill in Washington, U.S., December 10, 2019. REUTERS/Yuri Gripas
Editor's note:

This report from The Brookings Institution’s Artificial Intelligence and Emerging Technology (AIET) Initiative is part of “AI Governance,” a series that identifies key governance and norm issues related to AI and proposes policy remedies to address the complex challenges associated with emerging technologies.

On Nov. 25, an article headlined “Spot the deepfake. (It’s getting harder.)” appeared on the front page of The New York Times business section. The editors would not have placed this piece on the front page a year ago. If they had, few would have understood what its headline meant. Today, most do. This technology, one of the most worrying fruits of rapid advances in artificial intelligence (AI), allows those who wield it to create audio and video representations of real people saying and doing made-up things. As this technology develops, it becomes increasingly difficult to distinguish real audio and video recordings from fraudulent misrepresentations created by manipulating real sounds and images. “In the short term, detection will be reasonably effective,” says Subbarao Kambhampati, a professor of computer science at Arizona State University. “In the longer run, I think it will be impossible to distinguish between the real pictures and the fake pictures.”

The longer run may arrive as early as this year, in time for the presidential election. In August 2019, a team of Israeli researchers announced a new deepfake technique that creates realistic videos by substituting the face of one individual for that of another person who is actually speaking. Unlike previous methods, it works on any two people without lengthy, iterative training focused on their faces, cutting hours or even days from earlier deepfake workflows and eliminating the need for expensive hardware. Because the Israeli researchers have released their model publicly—a move they justify as essential for developing defenses against it—the proliferation of this cheap and easy deepfake technology appears inevitable.
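To make the mechanics a little more concrete, the sketch below illustrates the earlier, per-identity approach that the new technique improves on: a shared encoder learns a common representation of faces, a separate decoder is trained for each person, and a swap is produced by decoding one person's encoded face with the other person's decoder. This is a deliberately minimal PyTorch illustration rather than the Israeli researchers' released model; the layer sizes, the 64-pixel crops, and the random tensor standing in for a video frame are placeholders of my own.

```python
# Minimal sketch of the classic per-identity face-swap architecture:
# one shared encoder, one decoder per person. Training (not shown)
# reconstructs each person's faces through the shared encoder and that
# person's own decoder; real systems add face alignment, adversarial
# losses, and blending back into the original frame.
import torch
import torch.nn as nn

class Encoder(nn.Module):
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, 32, 4, stride=2, padding=1), nn.ReLU(),   # 64x64 -> 32x32
            nn.Conv2d(32, 64, 4, stride=2, padding=1), nn.ReLU(),  # 32x32 -> 16x16
            nn.Flatten(),
            nn.Linear(64 * 16 * 16, 256),                          # shared face code
        )

    def forward(self, x):
        return self.net(x)

class Decoder(nn.Module):
    def __init__(self):
        super().__init__()
        self.fc = nn.Linear(256, 64 * 16 * 16)
        self.net = nn.Sequential(
            nn.ConvTranspose2d(64, 32, 4, stride=2, padding=1), nn.ReLU(),    # 16 -> 32
            nn.ConvTranspose2d(32, 3, 4, stride=2, padding=1), nn.Sigmoid(),  # 32 -> 64
        )

    def forward(self, z):
        return self.net(self.fc(z).view(-1, 64, 16, 16))

encoder = Encoder()
decoder_a, decoder_b = Decoder(), Decoder()   # one decoder per identity

# The "swap": encode a frame of person A, render it with person B's decoder.
face_a = torch.rand(1, 3, 64, 64)             # dummy stand-in for a cropped video frame
swapped = decoder_b(encoder(face_a))
print(swapped.shape)                          # torch.Size([1, 3, 64, 64])
```

What made the August 2019 announcement significant is that it dispenses with exactly this per-pair training step, which is what made earlier approaches slow and hardware-hungry.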

To illustrate the challenge posed by this development, I note the warning offered by the unforgettable boudoir scene in the Marx Brothers comedy classic, “Duck Soup.”

Mrs. Teasdale (the redoubtable Margaret Dumont): Your Excellency! I thought you’d left.

Chicolini (Chico Marx disguised as Freedonia’s president): Oh, no, I no leave.

Mrs. Teasdale: But I saw you with my own eyes!

Chicolini: Well, who you gonna believe? Me or your own eyes?

As the 2020 election looms, Chicolini has posed a question with which candidates and the American people will be forced to grapple. If AI is reaching the point where it will be virtually impossible to detect audio and video representations of people saying things they never said (and even doing things they never did), seeing will no longer be believing, and we will have to decide for ourselves—without reliable evidence—whom or what to believe. Worse, candidates will be able to dismiss accurate but embarrassing representations of what they have said as fakes, an evasion that will be hard to disprove.

“If AI is reaching the point where it will be virtually impossible to detect audio and video representations of people saying things they never said …, seeing will no longer be believing.”

In 2008, Barack Obama was recorded at a small gathering saying that residents of hard-hit areas often responded by clinging to guns and religion. In 2012, Mitt Romney was recorded telling a group of funders that 47% of the population was happy to depend on the government for the basic necessities of life. And in 2016, Hillary Clinton dismissed many of Donald Trump’s supporters as a basket of deplorables. The accuracy of these recordings was undisputed. In 2020, however, campaign operatives will have technological grounds for challenging the authenticity of such revelations, and competing testimony from attendees at private events could throw such disputes into confusion. Nick Dufour, one of Google’s leading research engineers, observes that deepfakes “have allowed people to claim that video evidence that would otherwise be very compelling is a fake.”

Even if reliable modes of detecting deepfakes exist in the fall of 2020, they will operate more slowly than the generation of these fakes, allowing false representations to dominate the media landscape for days or even weeks. “A lie can go halfway around the world before the truth can get its shoes on,” warns David Doermann, the director of the Artificial Intelligence Institute at the University of Buffalo. And if defensive methods yield results short of certainty, as many will, technology companies will be hesitant to label the likely misrepresentations as fakes.

“The capacity to generate deepfakes is proceeding much faster than the ability to detect them.”

The capacity to generate deepfakes is proceeding much faster than the ability to detect them. In AI circles, reports The Washington Post’s Drew Harwell, identifying fake media has long received less attention, funding, and institutional support than creating it. “Why sniff out other people’s fantasy creations when you can design your own?” asks Hany Farid, a computer science professor and digital forensics expert at the University of California, Berkeley. “We are outgunned,” Farid says. “The number of people working on the video-synthesis side, as opposed to the detector side, is 100 to 1.” As a result, the technology is improving at breakneck speed. “In January 2019, deep fakes were buggy and flickery,” Farid told The Financial Times. “Nine months later, I’ve never seen anything like how fast they’re going. This is the tip of the iceberg.”

As Nasir Memon, a professor of computer science and engineering at New York University, puts it:

“As a consequence of this, even truth will not be believed. The man in front of the tank at Tiananmen Square moved the world. Nixon on the phone cost him his presidency. Images of horror from concentration camps finally moved us into action. If the notion of … believing what you see is under attack, that is a huge problem.”

Faced with this epistemological anarchy, voters will be more likely than ever before to remain within their partisan bubbles, believing only those politicians and media figures who share their political orientation. Evidence-based persuasion across partisan and ideological lines will be even more difficult than it has been in recent decades, as the media has bifurcated along partisan lines and political polarization has surged.

Legal scholars Bobby Chesney and Danielle Citron offer a comprehensive summary of the threat deepfake technologies pose to our politics and society. These realistic yet misleading depictions will be capable of distorting democratic discourse; manipulating elections; eroding trust in institutions; weakening journalism; exacerbating social divisions; undermining public safety; and inflicting hard-to-repair damage on the reputations of prominent individuals, including elected officials and candidates for office.

Beyond domestic politics, deepfake technologies pose a threat to America’s diplomacy and national security. As Brookings researchers Chris Meserole and Alina Polyakova argue, the U.S. and its allies are “ill-prepared” for the wave of deepfakes that Russian disinformation campaigns could unleash. Chesney and Citron point out that instead of the tweets and Facebook posts that disrupted the 2016 U.S. presidential campaign, Russian disinformation in 2020 could take the form of a “fake video of a white police officer shouting racial slurs or a Black Lives Matter activist calling for violence.” A fake video depicting an Israeli official saying or doing something inflammatory could undermine American efforts to build bridges between the Jewish state and its neighbors. A well-timed forgery could tip an election, they warn.

“A well-timed forgery could tip an election.”

This is a global problem. Already, a suspected deepfake may have contributed to an attempted coup in Gabon and to an unsuccessful effort to discredit Malaysia’s economic affairs minister and drive him from office. There is evidence suggesting that the diplomatic confrontation between Saudi Arabia and Qatar may have been sparked by a fake news story featuring invented quotes attributed to Qatar’s emir. A high-tech Russian disinformation campaign that tried to prevent the election of Emmanuel Macron as France’s president in 2017 was thwarted by a well-prepared Macron team, but might have succeeded against a less alert candidate. In Belgium, a political party created a deepfake video of President Donald Trump apparently interfering in the country’s internal affairs. “As you know,” the video falsely depicted Trump as saying, “I had the balls to withdraw from the Paris climate agreement—and so should you.” A political uproar ensued and subsided only when the party’s media team acknowledged the high-tech forgery. A deepfake depicting President Trump ordering the deployment of U.S. forces against North Korea could trigger a nuclear war.

What can we do about deepfakes?

What’s already happening

Awareness of the challenges posed by deepfake technologies is gradually spreading through the U.S. government. On June 19, 2019, the House Intelligence Committee convened a hearing at which several highly regarded AI experts offered testimony about the emerging threat. In a formal statement, the committee expressed its determination to examine “what role the public sector, the private sector, and society as a whole should play to counter a potentially grim, ‘post-truth’ future.” In his opening remarks, committee Chair Adam Schiff warned of a “nightmarish” scenario for the upcoming presidential campaigns and declared that “now is the time for social media companies to put in place policies to protect users from misinformation, not in 2021 after viral deepfakes have polluted the 2020 elections.”

On Sept. 10, 2019, the Senate Committee on Homeland Security and Government Affairs endorsed the Deepfake Report Act of 2019, which passed the full Senate as amended by unanimous consent on Oct. 24 and was referred to the House Energy and Commerce Committee on Oct. 28. This bill would require the Secretary of Homeland Security to issue an annual report on the state of what it terms “digital content forgery technology,” including advances in technology and its abuse by domestic and foreign actors.

In the executive branch, the Defense Advanced Research Projects Agency (DARPA) has spearheaded the effort to fight malicious deepfakes. Programs announced so far include Media Forensics (MediFor) and Semantic Forensics (SemaFor). The former, according to the official announcement, will bring together world-class researchers to “level the digital imagery playing field, which currently favors the manipulator, by developing technologies for the automated assessment of the integrity of an image or video.” Once successfully integrated into an end-to-end system, MediFor would not only detect deepfake manipulations but also provide detailed information about how they were generated.

SemaFor represents a refinement of this effort. Because detection strategies that rely on statistical signals can be easily fooled and foiled, SemaFor will focus on “semantic errors,” such as mismatched earrings, that would enable researchers to identify deepfakes that automated algorithms might overlook. To do this, DARPA plans to invest heavily in machines capable of simulating, but speeding up, the processes of common-sense reasoning and informal logic employed by human beings.
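A toy example may help convey what a semantic check looks like in practice. The snippet below compares the color statistics of two patches assumed to have been cropped around a subject’s left and right earlobes and flags a large mismatch. Everything here, from the patch size to the threshold, is an illustrative placeholder of my own rather than anything DARPA has specified; a real system would locate the regions with a facial-landmark detector and rely on learned models rather than a simple histogram distance.

```python
# Toy "semantic consistency" check: do the two earring regions of a face
# image look alike? The patches and the decision threshold are illustrative.
import numpy as np

def color_histogram(patch, bins=8):
    """Normalized joint RGB histogram of an HxWx3 uint8 patch."""
    hist, _ = np.histogramdd(patch.reshape(-1, 3), bins=bins, range=[(0, 256)] * 3)
    return hist / hist.sum()

def mismatch_score(left_patch, right_patch):
    """Chi-squared distance between the two patches' color histograms."""
    h1, h2 = color_histogram(left_patch), color_histogram(right_patch)
    return 0.5 * np.sum((h1 - h2) ** 2 / (h1 + h2 + 1e-9))

# Dummy patches standing in for crops around each earlobe; a real pipeline
# would extract them with a facial-landmark detector.
left = np.random.randint(0, 256, (24, 24, 3), dtype=np.uint8)
right = np.random.randint(0, 256, (24, 24, 3), dtype=np.uint8)

score = mismatch_score(left, right)
print(f"mismatch score: {score:.3f}")   # higher = more inconsistent
if score > 0.5:                         # illustrative threshold only
    print("possible semantic inconsistency: earrings do not match")
```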

Turning to the private sector: In September 2019, Facebook announced a new $10 million “Deepfake Detection Challenge,” a partnership with six leading academic institutions and other companies, including Amazon and Microsoft. To jumpstart the challenge, Facebook promises to release a dataset of faces and videos from consenting individuals.

Meanwhile, Google has joined forces with DARPA to fund researchers at the University of California at Berkeley and the University of Southern California who are developing a new digital forensics technique based on individuals’ style of speech and body movement, termed a “softbiometric signature.” Although this technique has already achieved 92% accuracy in experimental conditions, this success may prove temporary. One of the researchers, Professor Hao Li, characterizes the current situation as “an arms race between digital manipulations and the ability to detect [them].” Li says, “The advancements of AI-based algorithms are catalyzing both sides.”
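The published details of that work are beyond the scope of this report, but the general idea can be sketched with off-the-shelf tools: summarize each authentic clip of a person as a vector of mannerism features (for example, statistics of head pose and facial expression over time), fit a one-class model to those vectors, and flag questioned clips that fall outside the profile. The snippet below is my own simplified rendering of that idea using scikit-learn and random stand-in features, not the researchers’ actual pipeline or its 92% result.

```python
# Sketch of a soft-biometric check: learn one person's "profile" of
# mannerism features from authentic clips, then score a questioned clip.
# Feature extraction from video is assumed and replaced with random data.
import numpy as np
from sklearn.preprocessing import StandardScaler
from sklearn.svm import OneClassSVM

rng = np.random.default_rng(0)

# Pretend each clip is summarized by 20 mannerism features.
authentic_clips = rng.normal(loc=0.0, scale=1.0, size=(200, 20))
questioned_clip = rng.normal(loc=3.0, scale=1.0, size=(1, 20))  # deliberately off-profile

scaler = StandardScaler().fit(authentic_clips)
profile = OneClassSVM(kernel="rbf", nu=0.05, gamma="scale")
profile.fit(scaler.transform(authentic_clips))

# decision_function > 0 means "consistent with this person's mannerisms".
score = profile.decision_function(scaler.transform(questioned_clip))[0]
print("consistent with profile" if score > 0 else "flag as possible deepfake")
```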

Anti-deepfake efforts extend beyond established companies. The pace of development in this field has already generated one startup, Deeptrace, whose publications on the international uses of this technology have created a stir, and another—Truepic—that offers clients verified data sources against which possibly fraudulent videos can be checked.

Legislative options

Chesney and Citron comprehensively survey possible legislative responses to the dangers posed by this emerging technology, and their conclusions are less than encouraging. It is unlikely that an outright ban on deepfakes would pass constitutional muster. Existing bodies of civil law, such as protections against copyright infringement and defamation, are likely to be of limited utility.

“It is unlikely that an outright ban on deepfakes would pass constitutional muster. Existing bodies of civil law, such as protections against copyright infringement and defamation, are likely to be of limited utility.”

More promising is amending Section 230 of the Communications Decency Act, which was intended to provide a safe harbor for online service providers to experiment with different ways of filtering unwanted content. Chesney and Citron argue—persuasively, in my view—that courts have expanded this provision of the law into a near-blanket immunity against civil liability, including instances in which a provider distributed content knowing that it violated the law. Citing recent legislation holding platforms responsible for knowingly facilitating sex trafficking offenses, they recommend protecting only those providers that take “reasonable steps to address unlawful uses of its services.” As they acknowledge, however, current criminal statutes do more to protect potential victims of stalking, intimidation, and incitement than to safeguard the integrity of the political process. And within current law, administrative agencies such as the Federal Trade Commission, the Federal Communications Commission, and the Federal Election Commission cannot do much either. It remains to be seen whether laws that tackle this issue head-on, such as California’s recently enacted prohibition against creating or distributing deepfakes within 60 days of an election, will survive the inevitable constitutional challenge.

It was against this backdrop that Rep. Yvette Clarke (D-N.Y.) introduced the DEEPFAKES Accountability Act in June 2019. This bill would make it a crime to create and distribute a deepfake without including a digital marker of the modification and a text statement acknowledging the modification. It would also give victims the right to sue the creators of these misrepresentations for damage to their reputations.
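The bill does not prescribe a particular technical mechanism for the required marker, but one simple way such a disclosure could be made machine-verifiable is sketched below: hash the media, bundle the hash with a disclosure statement, and sign the bundle so that stripping the notice or altering the content afterward is detectable. The keyed-hash scheme and field names here are hypothetical simplifications of my own; a production system would use public-key signatures and a standardized metadata format.

```python
# Illustrative provenance marker: bind a disclosure statement to a hash of
# the media and sign the bundle. A shared HMAC key stands in for what would
# really be a public-key signature scheme.
import hashlib, hmac, json

CREATOR_KEY = b"hypothetical-creator-signing-key"
DISCLOSURE = "This video contains altered audio/visual content."

def attach_marker(media_bytes: bytes) -> dict:
    payload = json.dumps({"sha256": hashlib.sha256(media_bytes).hexdigest(),
                          "disclosure": DISCLOSURE})
    tag = hmac.new(CREATOR_KEY, payload.encode(), hashlib.sha256).hexdigest()
    return {"payload": payload, "signature": tag}

def verify_marker(media_bytes: bytes, marker: dict) -> bool:
    expected = hmac.new(CREATOR_KEY, marker["payload"].encode(), hashlib.sha256).hexdigest()
    claimed = json.loads(marker["payload"])["sha256"]
    return (hmac.compare_digest(expected, marker["signature"])
            and claimed == hashlib.sha256(media_bytes).hexdigest())

video = b"...raw video bytes..."
marker = attach_marker(video)
print(verify_marker(video, marker))         # True: marker intact, content unchanged
print(verify_marker(video + b"x", marker))  # False: content changed after marking
```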

As critics point out, the broad language of the bill would make it difficult to distinguish between truly malicious deepfakes and the use of this technology for entertainment and satire, triggering First Amendment concerns. Moreover, the good guys would be more likely to add digital and verbal identifiers than would the bad actors who are trying to sow discord and swing elections. Prospects for enacting this bill do not appear promising.

Conclusion: An interim proposal

The disparate technological and legislative efforts discussed above seem unlikely to cohere into an effective counter to deepfakes in time to safeguard the integrity of the 2020 election. To accelerate the process, I would recommend the formation of a nonpartisan, anti-deepfake SWAT team made up of the best academic and private-sector researchers, with a hotline manned at all hours of the day and night. Candidates for elected office could send alleged deepfakes to this SWAT team for analysis and judgment within 24 hours. This new entity would establish relationships with as many traditional and social media outlets as possible, across the political spectrum, with individuals in each outlet designated as points of contact. These outlets would be encouraged to sign a pledge committing them to label as false and misleading every audio or video representation the team judges to be a deepfake product and—in the case of internet platforms—to remove these representations from their sites. As previous voluntary efforts have shown, the effectiveness of this proposal will depend on buy-in from both major parties and their candidates. If it works in 2020, it could be established as a permanent standing organization rather than a temporary emergency measure.

“[T]echnological and legislative efforts … seem unlikely to cohere into an effective counter to deepfakes in time to safeguard the integrity of the 2020 election.”

As Chesney and Citron point out, the content screening and removal policies of the platforms may prove to be “the most salient response mechanism of all” because their terms-of-service agreements are “the single most important documents governing digital speech in today’s world.” Internet platform providers have an opportunity to contribute—voluntarily—to a 2020 electoral process that honors norms of truth-telling more than in recent elections. If they do not avail themselves of this opportunity—and if deepfakes rampage through next year’s election, leaving a swathe of falsehoods and misrepresentations in their wake—Congress may well move to strip the platforms of the near total immunity they have enjoyed for a quarter of a century, and the courts may rethink interpretations of the First Amendment that prevent lawmakers from protecting fundamental democratic processes.

Facebook’s refusal to remove a crudely altered video of House Speaker Nancy Pelosi played poorly in many quarters, as did the explanation that the company’s rules did not prohibit posting false information. If I were Mark Zuckerberg, I wouldn’t bet the future of my company on the continuation of business as usual.


The Brookings Institution is a nonprofit organization devoted to independent research and policy solutions. Its mission is to conduct high-quality, independent research and, based on that research, to provide innovative, practical recommendations for policymakers and the public. The conclusions and recommendations of any Brookings publication are solely those of its author(s), and do not reflect the views of the Institution, its management, or its other scholars.

Microsoft provides support to The Brookings Institution’s Artificial Intelligence and Emerging Technology (AIET) Initiative, and Amazon, Facebook, and Google provide general, unrestricted support to the Institution. The findings, interpretations, and conclusions in this report are not influenced by any donation. Brookings recognizes that the value it provides is in its absolute commitment to quality, independence, and impact. Activities supported by its donors reflect this commitment.
