Fighting deepfakes when detection fails

A green wireframe model covers an actor’s lower face during the creation of a synthetic facial reanimation video, known alternatively as a deepfake, in London, Britain, February 12, 2019. (Reuters TV via Reuters)
Editor's note:

This report from The Brookings Institution’s Artificial Intelligence and Emerging Technology (AIET) Initiative is part of “AI Governance,” a series that identifies key governance and norm issues related to AI and proposes policy remedies to address the complex challenges associated with emerging technologies.

Deepfakes intended to spread misinformation are already a threat to online discourse, and there is every reason to believe this problem will become more significant in the future. So far, most ongoing research and mitigation efforts have focused on automated deepfake detection, which will aid deepfake discovery for the next few years. However, the situation is worse than cybersecurity’s perpetual cat-and-mouse game: Automated deepfake detection is likely to become impossible in the relatively near future, as the approaches that generate fake digital content improve considerably. In addition to supporting the near-term creation and responsible dissemination of deepfake detection technology, policymakers should invest in discovering and developing longer-term solutions. Policymakers should take actions that:

  • Support ongoing deepfake detection efforts with continued funding through DARPA’s MediFor program, as well as new grants that support collaboration among detection efforts and that train journalists and fact-checkers to use these tools.
  • Create an additional stream of funding awards for the development of new tools, such as reverse video search or blockchain-based verification systems, that may better persist in the face of undetectable deepfakes.
  • Encourage the release of large social media datasets for social science researchers to study and explore solutions to viral misinformation and disinformation campaigns.

Deepfakes and disinformation

Deepfakes are audio, images, and videos that appear to realistically depict speech and actions, but are actually synthetic representations made using modern artificial intelligence. Although not exclusively applicable to faces, the use of this technology to manipulate facial expressions and speech, or face-swap an individual into a video, has garnered the greatest concern. Deepfakes of prominent figures like Barack Obama, Donald Trump, and Mark Zuckerberg have made national news, while female journalists and actresses have become the first direct victims after being unwillingly cast in pornographic videos.

Carefully made deepfakes can already be very realistic, though only under certain circumstances—an attentive observer will notice that convincing deepfakes focus on individuals who don’t wear glasses or have beards, and typically use a stationary camera. Still, the pace of development has been very fast: It was only last year that models were still struggling to accurately generate teeth and deepfaked individuals failed to blink consistently.

Deepfakes pose a significant problem for public knowledge. Their development is not a watershed moment—altered images, audio, and video have pervaded the internet for a long time—but they will significantly contribute to the continued erosion of faith in digital content. The best artificial intelligence tools are open for anyone to use, and many deepfake-specific technologies are freely available. This means that creating convincing fake content is easier than ever before and will become more accessible for the foreseeable future.

It is certainly possible that a single convincing, spectacular, and effectively timed deepfake video could temporarily crash the stock market, cause a riot, or throw an election. However, these are not the most likely scenarios. More likely, a large number of deepfakes uploaded by amateurs (either with satirical or political aims) and influence campaigns (of foreign origin or those driven by advertising monetization) will slowly disperse through the internet, further clouding the authenticity of the digital world.

The effects will be threefold:

  • Disinformation: People are more likely to have a visceral reaction to disinformation in the form of fake image, audio, and video content, which enables the altered media to spread more quickly than purely textual fake information. Further, images and video have been suggested to trigger a Mandela effect: the formation of memories of events that never happened.
  • Exhaustion of critical thinking: It will take more effort for individuals to ascertain whether information is true, especially when it does not come from trusted actors (e.g. ProPublica or The New York Times). Uncertainty around content veracity might also dissuade an individual from sharing accurate content, reducing the distribution of accurate information.
  • The liar’s dividend: The existence of fully synthetic content offers an avenue for actors to deflect accusations of impropriety based on recordings and video, by claiming the source material has been faked.

These outcomes are troubling and will be most pervasive in the immediate future, as deepfake quality increases and societal awareness lags.

Automated deepfake detection

Most ongoing research aimed at combating the influence of deepfakes has focused on automated deepfake detection: using algorithms to discern if a specific image, audio clip, or video has been substantially modified from an original. A range of papers have discovered telltale signs of deepfakes, including unnatural blinking patterns, distortion in facial features, inconsistencies across images within a video (especially concerning lighting), incongruities between the speech and mouth movements of the subject, and even learning to note the absence of biometric patterns specific to world leaders. However, these detection methods are likely to be short-lived.

The blinking example is especially informative. When researchers at the University at Albany noted that deepfakes could be detected based on irregular blinking patterns, it took only a few months for new deepfakes to emerge that had corrected for this particular imperfection. Further, even highly successful deepfake detection methods are difficult to scale. Identifying 90% of deepfakes may sound excellent, but keep in mind that these materials can be mass-produced and mass-distributed by armies of bots and trolls, as is the case with most disinformation campaigns. If a campaign uploads 100,000 fake videos, a 90% detection rate still leaves 10,000 of them in circulation.

Researchers focused on deepfake detection are largely using clever applications of artificial intelligence to uncover the deceptive images. In fact, the detection methods are similar enough to the methods used to make deepfakes that the research on detection inadvertently provides a roadmap for improving the fakes. Worse, the deepfake detection models themselves can be used directly in the deepfake generation process to improve their output.

What’s that A-GAN?

A basic understanding of how deepfakes are created has direct implications for their societal consequences and governance. One of the underlying technologies enabling deepfake creation is a specific subset of artificial intelligence known as generative adversarial networks, or GANs.

Currently, the most important subfield of artificial intelligence is focused on neural networks—a specific type of algorithm that learns in layers to build complex concepts out of less complex ones. GANs use two neural networks: one to generate fake images, and a second to then evaluate those images (think of videos as just a series of images). The word “adversarial” is used because the first neural network (the generator) is attempting to make images that fool the second neural network into thinking they are real. The second neural network (the discriminator) is asked to discern if the generated images, mixed in with real images from previously existing data, are real or fake. The generator and discriminator alternate, with the generator aiming to improve in each cycle until it can usually fool the discriminator.
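
To make this alternation concrete, the following is a minimal, illustrative sketch of a GAN training loop in PyTorch. It uses a toy one-dimensional distribution in place of images, and the network sizes, learning rates, and step counts are arbitrary choices for exposition rather than a description of any real deepfake system.

```python
import torch
import torch.nn as nn

# Toy "real" data: samples from a normal distribution the generator must imitate.
def real_batch(n=64):
    return torch.randn(n, 1) * 1.5 + 4.0

# Generator: maps random noise to a fake sample.
generator = nn.Sequential(nn.Linear(8, 32), nn.ReLU(), nn.Linear(32, 1))
# Discriminator: outputs the probability that a sample is real.
discriminator = nn.Sequential(nn.Linear(1, 32), nn.ReLU(), nn.Linear(32, 1), nn.Sigmoid())

g_opt = torch.optim.Adam(generator.parameters(), lr=1e-3)
d_opt = torch.optim.Adam(discriminator.parameters(), lr=1e-3)
bce = nn.BCELoss()

for step in range(5000):
    # Discriminator update: label real samples 1, generated samples 0.
    real = real_batch()
    fake = generator(torch.randn(64, 8)).detach()
    d_loss = bce(discriminator(real), torch.ones(64, 1)) + \
             bce(discriminator(fake), torch.zeros(64, 1))
    d_opt.zero_grad()
    d_loss.backward()
    d_opt.step()

    # Generator update: try to make the discriminator call fakes "real".
    fake = generator(torch.randn(64, 8))
    g_loss = bce(discriminator(fake), torch.ones(64, 1))
    g_opt.zero_grad()
    g_loss.backward()
    g_opt.step()
```

Real deepfake pipelines operate on images or video frames with convolutional networks and far more elaborate losses, but the alternating generator-versus-discriminator structure is the same.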

Note again the definition of a discriminator: a neural network that discerns real versus fake content. That is exactly what the deepfake detecting neural networks are intended to do, and as it turns out, they are exchangeable for this purpose. So, if it is possible to build a deepfake detecting algorithm, it is possible to use that detector in the GAN, essentially designing a new generation of deepfakes specialized in being convincing to that detector. This dilemma poses a real problem for the modern field of artificial intelligence, which is dogmatically open-source—meaning its papers, data, and models are rapidly and widely dispersed across the field, available to all.
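
Concretely, once a trained detector is released, a generator can be fine-tuned directly against it. The sketch below is a hedged illustration of that step, using toy stand-ins for both models: a hypothetical released `detector` that outputs the probability its input is fake, and a small `generator` like the one in the previous example. Neither the architectures nor the training recipe comes from any published system.

```python
import torch
import torch.nn as nn

# Stand-ins: a toy generator, and a hypothetical released deepfake-detection
# model that outputs P(input is fake).
generator = nn.Sequential(nn.Linear(8, 32), nn.ReLU(), nn.Linear(32, 1))
detector = nn.Sequential(nn.Linear(1, 32), nn.ReLU(), nn.Linear(32, 1), nn.Sigmoid())

# Freeze the detector; it is used only as a training signal, never updated.
for p in detector.parameters():
    p.requires_grad = False

g_opt = torch.optim.Adam(generator.parameters(), lr=1e-3)

for step in range(2000):
    fake = generator(torch.randn(64, 8))
    # Drive the detector's "probability of fake" toward zero on generated content.
    g_loss = detector(fake).mean()
    g_opt.zero_grad()
    g_loss.backward()
    g_opt.step()
```

In effect, an openly published detector doubles as a training signal for the next generation of fakes.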

A cat-and-cat game

Recently, deepfake detection has been compared to cybersecurity, in which there has been a long-standing cat-and-mouse game, with perpetually improving security countered by perpetually novel cyberattacks. This analogy is overly optimistic.

Cybersecurity does seem to experience a constant seesawing of defensive and offensive advantage—new standards and tools improve security, but then new interactions and systems open new opportunities for attack. Deepfakes, however, can be literally perfect: There is an attainable point at which deepfakes can be entirely indistinguishable from authentic content. Of course, there will be constraints on videos of this quality, such as eschewing beards, compressing the video data, and using only single cameras facing single individuals. But for the best deepfakes, it will be a simple cat-and-cat game—with nothing that automated detection can do.

I will venture to guess that this deepfake supremacy will come within the next decade, if not sooner. If this seems unrealistically fast, note that GANs were only invented in 2014, and the incredible pace of artificial intelligence development shows no sign of slowing, as it has many systemic drivers, including massive private investment, immense academic interest, billowing cloud-computing resources, proliferating datasets, and the aforementioned open-source nature of the field.

Deepfake detection in 2020

Deepfake detection will be a viable method for social media companies to reduce disinformation in 2020, which is important not only because of the presidential election, but also because awareness of realistic deepfakes will still be quite low. In addition, while some deepfakes will reach ultra-realism in the coming few years, many will still be more amateurish and thus easily detectable with the best detection methods.

There has been substantial activity around deepfakes in recent weeks: Google released a new dataset of real and deepfaked videos, while Facebook, Microsoft, Amazon, and the Partnership on AI are developing a new dataset of videos with an associated competition (and prize funding) to develop better detection methods. Despite being aware of the short-term nature of detection-based solutions, major content-distributing tech companies continue to invest heavily in deepfake detection algorithms. There is little choice as the 2020 elections loom, and inaction will be extensively scrutinized and criticized.

In the federal government, DARPA’s Media Forensics program is awarding grants to fund some of the aforementioned deepfake research and it’s worth expanding that funding, as is proposed in the current defense reauthorization bill that passed the House of Representatives. Similarly, in the current Intelligence Authorization Act, which has also passed the House and awaits Senate consideration, there is $5,000,000 in funding for a competition run by IARPA to develop new detection tools.

This would be money well-spent, though coordination and dissemination efforts could also use financial support. Because deepfake detection models can be used as discriminators in the deepfake generation process, these new models should no longer simply be made open-source. While leading researchers seem to have realized this, they are not necessarily equipped to decide which nonprofits and newspapers should have access to their models. Further, journalists and fact-checkers may not have the expertise to effectively use these models without training: A program to support an intermediary that securely distributes the models and educates users in their use would add as much value as additional detection models.

As a worthwhile aside, this is not the last time that challenges will warrant funding or other rapid and informed action around AI. The ability to contribute to solving societal problems like these (see algorithmic bias or backdoors in neural networks) is a factor that the National Science Foundation (NSF) should take into consideration when awarding the recently announced $120 million for National AI Research Institutes.

Beyond automated deepfake detection

Investing in new technologies

There are other steps that can be taken to reduce the impact of deepfakes, even once they are indistinguishable from real video. Reverse image search has empowered journalists, fact-checkers, and everyday netizens to unearth original photos from which forgeries are made. This type of tool allows users to upload an image, then uses computer vision to discover similar photos online, which can reveal the photo as altered or presented outside its original context. For instance, this was the case for many photos that circulated online during Hurricane Sandy.
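
The core of such tools is a visual fingerprint that survives resizing, recompression, and minor edits. Below is a minimal sketch of that idea using perceptual hashing with the Pillow and ImageHash libraries; the file names are placeholders, and real reverse image search engines rely on far more robust visual features and web-scale indexes.

```python
from PIL import Image
import imagehash  # pip install ImageHash

# Perceptual hashes change little under resizing, recompression, or minor edits,
# so a small Hamming distance suggests one image is derived from the other.
original = imagehash.phash(Image.open("original_photo.jpg"))   # placeholder path
candidate = imagehash.phash(Image.open("suspect_photo.jpg"))   # placeholder path

distance = original - candidate  # Hamming distance between 64-bit hashes
if distance <= 10:
    print(f"Likely derived from the same source (distance={distance})")
else:
    print(f"Probably unrelated images (distance={distance})")
```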

Reverse video search, however, does not currently exist in a publicly available way. This means that the rapid proliferation of deepfake videos is leaving honest brokers behind, with an outdated set of tools for the job at hand. A financial investment in developing and publicly releasing this technology would enable the discovery of any deepfakes based on publicly available footage (as most have been so far) and could thus be very impactful. This line of funding could come through DARPA’s existing program or the new NSF-funded National AI Research Institutes.
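
One plausible way to build such a tool would be to extend perceptual hashing from single images to sampled video frames: index frames from known footage, then check how many frames of a suspect clip match closely. The sketch below, which assumes OpenCV alongside the ImageHash library and uses placeholder file paths, only illustrates the indexing idea; a real service would also need web-scale crawling, temporal alignment, and robustness to heavy re-editing.

```python
import cv2  # pip install opencv-python
import imagehash
from PIL import Image

def frame_hashes(video_path, seconds_between_samples=1.0):
    """Sample frames from a video and return their perceptual hashes."""
    cap = cv2.VideoCapture(video_path)
    fps = cap.get(cv2.CAP_PROP_FPS) or 30.0
    step = max(1, int(fps * seconds_between_samples))
    hashes, i = [], 0
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        if i % step == 0:
            rgb = cv2.cvtColor(frame, cv2.COLOR_BGR2RGB)
            hashes.append(imagehash.phash(Image.fromarray(rgb)))
        i += 1
    cap.release()
    return hashes

# Index known footage, then see how many frames of a suspect clip match closely.
index = frame_hashes("known_interview.mp4")    # placeholder path
suspect = frame_hashes("suspect_clip.mp4")     # placeholder path
matches = sum(1 for h in suspect if any(h - k <= 8 for k in index))
print(f"{matches}/{len(suspect)} sampled frames resemble the known footage")
```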

There are other technical approaches worth keeping an eye on. A small number of companies are selling blockchain-based verification, in which content can be registered to an unalterable ledger at the time of creation. This may be valuable for actors who have a significant stake in proving that their content is the original, should a discrepancy arise. While it’s possible the newswires and others may adopt these systems, it is unlikely to ever affect the vast majority of content created by normal citizens.
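
The underlying mechanism is straightforward: record a cryptographic fingerprint of the content, with a timestamp, in a ledger that cannot later be rewritten. The toy sketch below illustrates registration and verification with a simple in-memory hash chain; it is only a stand-in for the distributed ledgers, digital signatures, and trusted capture hardware that commercial offerings actually use.

```python
import hashlib
import json
import time

ledger = []  # in-memory stand-in for an append-only, tamper-evident ledger

def register(content: bytes) -> dict:
    """Record a content fingerprint, chained to the previous ledger entry."""
    prev = ledger[-1]["entry_hash"] if ledger else "genesis"
    entry = {
        "content_hash": hashlib.sha256(content).hexdigest(),
        "timestamp": time.time(),
        "prev_hash": prev,
    }
    entry["entry_hash"] = hashlib.sha256(
        json.dumps(entry, sort_keys=True).encode()
    ).hexdigest()
    ledger.append(entry)
    return entry

def verify(content: bytes) -> bool:
    """Check whether this exact content was registered earlier."""
    digest = hashlib.sha256(content).hexdigest()
    return any(e["content_hash"] == digest for e in ledger)

clip = b"...raw video bytes..."  # stand-in for an actual file's contents
register(clip)                   # a newsroom registers footage at publication time
print(verify(clip))              # True: a circulating copy matches the original
print(verify(b"altered bytes"))  # False: modified content does not match
```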

Supporting social science

The massive datasets held by social media companies could potentially speak volumes to the problem of deepfakes, as well as to disinformation more broadly. Twitter has released data containing millions of tweets from state-sponsored propaganda campaigns, which researchers have used to learn about the different types of online trolls. Facebook is in the middle of a process to release very large troves of election-relevant data—though there are some delays and concerns—through its Social Science One initiative.

After an experimental trial period, WhatsApp recently decided to reduce the maximum number of recipients for forwarded messages to five (down from 256) to slow the spread of misinformation. This sounds like a positive step, but independent research showed it is only partially effective: Fake content was still able to spread quickly and widely through WhatsApp groups. Since WhatsApp does not make its data available for independent analysis, these researchers had to go through an arduous data collection process and were only able to collect some of the most pertinent data. To help discover new solutions to combat the spread of disinformation, policymakers should encourage technology companies to broadly disseminate their data to social scientists through responsible and privacy-preserving platforms.

CDA reform and deepfake bans

While limited in their influence, legal remedies have a role to play as well. A growing number of scholars are calling for qualifications on Section 230 of the Communications Decency Act. This would make it easier for private citizens to hold technology platforms accountable for disseminating harmful or slanderous content uploaded by their users. These qualifications would give individuals meaningful leverage to petition technology companies to remove deepfaked content of their likeness from these websites. Danielle Citron and Bobby Chesney summarize this position: “Section 230’s immunity provision has been stretched considerably since its enactment, immunizing platforms even when they solicit or knowingly host illegal or tortious activity.” Of course, reform of Section 230, oft-called the First Amendment of the internet, remains deeply controversial.

As a more direct approach, California just banned the creation and circulation of deepfakes of politicians within 60 days of an election. Whether this legislation is enforceable, effective, or legal is yet to be seen, but this effort is worth watching. There are other pertinent laws already in existence (e.g. copyright, the right of publicity, and tort law), but most depend on holding the deepfake creator responsible, which can be difficult due to online anonymity and legal jurisdiction.

A recent report from the RAND Corporation described the broad societal pattern of growing misinformation as “truth decay”—and in keeping with this figure of speech, it is clear that deepfake detection has a half-life. While there is no single solution in new technology, research, or law, it is certainly time to start looking to these options with greater urgency and funding.


The Brookings Institution is a nonprofit organization devoted to independent research and policy solutions. Its mission is to conduct high-quality, independent research and, based on that research, to provide innovative, practical recommendations for policymakers and the public. The conclusions and recommendations of any Brookings publication are solely those of its author(s), and do not reflect the views of the Institution, its management, or its other scholars.

Microsoft provides support to The Brookings Institution’s Artificial Intelligence and Emerging Technology (AIET) Initiative, and Amazon, Facebook, and Google provide general, unrestricted support to the Institution. The findings, interpretations, and conclusions in this report are not influenced by any donation. Brookings recognizes that the value it provides is in its absolute commitment to quality, independence, and impact. Activities supported by its donors reflect this commitment.