This report from The Brookings Institution’s Artificial Intelligence and Emerging Technology (AIET) Initiative is part of “AI Governance,” a series that identifies key governance and norm issues related to AI and proposes policy remedies to address the complex challenges associated with emerging technologies.
Deepfakes intended to spread misinformation are already a threat to online discourse, and there is every reason to believe this problem will become more significant in the future. So far, most ongoing research and mitigation efforts have focused on automated deepfake detection, which will aid deepfake discovery for the next few years. However, in a dynamic worse than cybersecurity’s perpetual cat-and-mouse game, automated deepfake detection is likely to become impossible in the relatively near future, as the techniques that generate fake digital content continue to improve. In addition to supporting the near-term creation and responsible dissemination of deepfake detection technology, policymakers should invest in discovering and developing longer-term solutions. Policymakers should take actions that:
- Support ongoing deepfake detection efforts with continued funding through DARPA’s MediFor program, as well as new grants to support collaboration among detection efforts and to train journalists and fact-checkers to use these tools.
- Create an additional stream of funding awards for the development of new tools, such as reverse video search or blockchain-based verification systems, that may better persist in the face of undetectable deepfakes.
- Encourage the release of large social media datasets for social science researchers to study and explore solutions to viral misinformation and disinformation campaigns.
Deepfakes and disinformation
Deepfakes are audio, images, and videos that appear to realistically depict speech and actions, but are actually synthetic representations made using modern artificial intelligence. Although not exclusively applicable to faces, the use of this technology to manipulate facial expressions and speech, or face-swap an individual into a video, has garnered the greatest concern. Deepfakes of prominent figures like Barack Obama, Donald Trump, and Mark Zuckerberg have made national news, while female journalists and actresses have become the first direct victims after being unwillingly cast in pornographic videos.
Carefully made deepfakes can already be very realistic, though only under certain circumstances—an attentive observer will notice that convincing deepfakes focus on individuals who don’t wear glasses or have beards, and typically use a stationary camera.1 Still, the pace of development has been very fast: It was only last year that the models struggled to accurately generate teeth2 and deepfaked individuals failed to blink consistently.3
Deepfakes pose a significant problem for public knowledge. Their development is not a watershed moment—altered images, audio, and video have pervaded the internet for a long time4—but they will significantly contribute to the continued erosion of faith in digital content. The best artificial intelligence tools are open for anyone to use, and many deepfake-specific technologies are freely available. This means that creating convincing fake content is easier than ever before and will become more accessible for the foreseeable future.
It is certainly possible that a single convincing, spectacular, and effectively timed deepfake video could temporarily crash the stock market, cause a riot, or throw an election. However, these are not the most likely scenarios. More likely, a large number of deepfakes uploaded by amateurs (either with satirical or political aims) and influence campaigns (of foreign origin5 or those driven by advertising monetization6) will slowly disperse through the internet, further clouding the authenticity of the digital world.
The effects will be threefold:
- Disinformation: People are more likely to have a visceral reaction to disinformation in the form of fake image, audio, and video content, which enables the altered media to spread more quickly than purely textual fake information. Further, images and video have been suggested to trigger a Mandela effect,7 the creation of memories of events that never happened.
- Exhaustion of critical thinking: It will take more effort for individuals to ascertain whether information is true, especially when it does not come from trusted actors (e.g. ProPublica or The New York Times). Uncertainty around content veracity might also dissuade an individual from sharing accurate content, reducing the distribution of accurate information.
- The liar’s dividend: The existence of fully synthetic content offers an avenue for actors to deflect accusations of impropriety based on recordings and video, by claiming the source material has been faked.8
These outcomes are troubling and will be most pervasive in the immediate future, as deepfake quality increases and societal awareness lags.
Automated deepfake detection
Most ongoing research aimed at combating the influence of deepfakes has focused on automated deepfake detection: using algorithms to discern if a specific image, audio clip, or video has been substantially modified from an original. A range of papers have discovered telltale signs of deepfakes, including unnatural blinking patterns, distortion in facial features,9 inconsistencies across images within a video (especially concerning lighting),10 incongruities between the speech and mouth movements of the subject,11 and even models that learn the biometric patterns of specific world leaders and flag videos in which those patterns are absent.12 However, these detection methods are likely to be short-lived.
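To make the approach concrete, the sketch below shows one way a basic frame-level detector might be assembled: a standard image classifier fine-tuned to label individual video frames as real or fake. The dataset layout, model choice, and training settings are illustrative assumptions, not a description of any of the systems cited above.

```python
# Minimal sketch of a frame-level deepfake detector: an off-the-shelf image
# classifier fine-tuned to label video frames as "real" or "fake".
# Assumes a labeled dataset laid out as frames/real/*.jpg and frames/fake/*.jpg;
# the paths and hyperparameters are illustrative only.
import torch
import torch.nn as nn
from torchvision import datasets, transforms
from torchvision.models import resnet18, ResNet18_Weights

transform = transforms.Compose([
    transforms.Resize((224, 224)),
    transforms.ToTensor(),
])
dataset = datasets.ImageFolder("frames/", transform=transform)  # real/ and fake/ subfolders
loader = torch.utils.data.DataLoader(dataset, batch_size=32, shuffle=True)

model = resnet18(weights=ResNet18_Weights.DEFAULT)   # start from ImageNet features
model.fc = nn.Linear(model.fc.in_features, 2)        # two classes: real, fake

criterion = nn.CrossEntropyLoss()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)

for epoch in range(5):
    for frames, labels in loader:
        optimizer.zero_grad()
        loss = criterion(model(frames), labels)
        loss.backward()
        optimizer.step()
```

Published detectors add many refinements, such as temporal models across frames, face cropping, and artifact-specific features, but the core is a learned classifier of roughly this kind.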
The blinking example is especially informative. When researchers at the University at Albany noted that deepfakes could be detected based on irregular blinking patterns, it took only a few months for new deepfakes to emerge that had corrected for this particular imperfection.13 Further, even highly successful deepfake detection methods are difficult to scale. Identifying 90% of deepfakes may sound excellent, but if tens of thousands of fakes are uploaded, the 10% that slip through still number in the thousands. Keep in mind that these materials can be mass-produced and mass-distributed by armies of bots and trolls, as is the case with most disinformation campaigns.
Researchers focused on deepfake detection are largely using clever applications of artificial intelligence to uncover the deceptive images. In fact, the detection methods are similar enough to the methods used to make deepfakes that the research on detection inadvertently provides a roadmap for improving the fakes. Worse, the deepfake detection models themselves can be used directly in the deepfake generation process to improve their output.
What’s that A-GAN?
A basic understanding of how deepfakes are created has direct implications for their societal consequences and governance. One of the underlying technologies enabling deepfake creation is a specific subset of artificial intelligence known as generative adversarial networks, or GANs.
Currently, the most important subfield of artificial intelligence is focused on neural networks—a specific type of algorithm that learns in layers to build complex concepts out of less complex ones. GANs use two neural networks: one to generate fake images, and a second to then evaluate those images (think of videos as just a series of images). The word “adversarial” is used because the first neural network (the generator) is attempting to make images that fool the second neural network into thinking they are real. The second neural network (the discriminator) is asked to discern if the generated images, mixed in with real images from previously existing data, are real or fake. The generator and discriminator alternate, with the generator aiming to improve in each cycle until it can usually fool the discriminator.
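For readers who want to see the mechanics, the following is a compressed sketch of that alternating loop. The tiny fully connected networks stand in for the far larger image models used to produce actual deepfakes, and the dimensions and learning rates are arbitrary; the structure of the loop is the point.

```python
# Compressed sketch of the alternating GAN training loop described above.
import torch
import torch.nn as nn

latent_dim, image_dim = 64, 784  # e.g. a flattened 28x28 image

generator = nn.Sequential(nn.Linear(latent_dim, 256), nn.ReLU(),
                          nn.Linear(256, image_dim), nn.Tanh())
discriminator = nn.Sequential(nn.Linear(image_dim, 256), nn.LeakyReLU(0.2),
                              nn.Linear(256, 1), nn.Sigmoid())

loss_fn = nn.BCELoss()
g_opt = torch.optim.Adam(generator.parameters(), lr=2e-4)
d_opt = torch.optim.Adam(discriminator.parameters(), lr=2e-4)

def training_step(real_images):
    batch = real_images.size(0)
    real_labels, fake_labels = torch.ones(batch, 1), torch.zeros(batch, 1)

    # Discriminator turn: learn to tell real images from generated ones.
    fake_images = generator(torch.randn(batch, latent_dim)).detach()
    d_loss = (loss_fn(discriminator(real_images), real_labels) +
              loss_fn(discriminator(fake_images), fake_labels))
    d_opt.zero_grad(); d_loss.backward(); d_opt.step()

    # Generator turn: produce images the discriminator labels as real.
    g_loss = loss_fn(discriminator(generator(torch.randn(batch, latent_dim))), real_labels)
    g_opt.zero_grad(); g_loss.backward(); g_opt.step()
```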
Note again the definition of a discriminator: a neural network that discerns real versus fake content. That is exactly what the deepfake detecting neural networks are intended to do, and as it turns out, they are exchangeable for this purpose. So, if it is possible to build a deepfake detecting algorithm, it is possible to use that detector in the GAN, essentially designing a new generation of deepfakes specialized in being convincing to that detector. This dilemma poses a real problem for the modern field of artificial intelligence, which is dogmatically open-source—meaning its papers, data, and models are rapidly and widely dispersed across the field, available to all.
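To make the point about exchangeability concrete, consider the hypothetical fragment below, which reuses the names from the sketch above. `load_published_detector()` is a placeholder for any open-sourced detection model that outputs the probability an image is real; nothing more than this is needed to train a generator specifically against that detector.

```python
# Hypothetical illustration of the concern described above: a published
# detection model is dropped into the training loop in place of the
# discriminator, so the generator learns to fool that specific detector.
# load_published_detector() is a placeholder, not a real library call;
# generator, loss_fn, g_opt, and latent_dim come from the sketch above.
detector = load_published_detector()   # outputs P(input is real)
batch_size = 32

noise = torch.randn(batch_size, latent_dim)
g_loss = loss_fn(detector(generator(noise)), torch.ones(batch_size, 1))
g_opt.zero_grad(); g_loss.backward(); g_opt.step()
```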
A cat-and-cat game
Recently, deepfake detection has been compared to cybersecurity, in which there has been a long-standing cat-and-mouse game, with perpetually improving security countered by perpetually novel cyberattacks. This analogy is overly optimistic.
Cybersecurity does seem to experience a constant seesawing of defensive and offensive advantage—new standards and tools improve security, but then new interactions and systems open new opportunities for attack. Deepfakes, however, can be literally perfect: There is an attainable point at which deepfakes can be entirely indistinguishable from authentic content. Of course, there will be constraints on videos of this quality, such as eschewing beards, compressing the video data, and using only single cameras facing single individuals. But for the best deepfakes, it will be a simple cat-and-cat game—with nothing that automated detection can do.
I will venture to guess that this deepfake supremacy will come within the next decade, if not sooner. If this seems unrealistically fast, note that GANs were only invented in 2014,14 and the incredible pace of artificial intelligence development shows no sign of slowing, as it has many systemic drivers, including massive private investment, immense academic interest, expanding cloud-computing resources, proliferating datasets, and the aforementioned open-source nature of the field.
Deepfake detection in 2020
Deepfake detection will be a viable method for social media companies to reduce disinformation in 2020, which is important not only because of the presidential election, but also because awareness of realistic deepfakes will still be quite low. In addition, while some deepfakes will reach ultra-realism in the coming few years, many will still be more amateurish and thus easily detectable with the best detection methods.
There has been substantial activity around deepfakes in recent weeks: Google released a new dataset of real and deepfaked videos,15 while Facebook, Microsoft, Amazon, and the Partnership on AI are developing a new dataset of videos with an associated competition (and prize funding) to develop better detection methods. Despite being aware of the short-term nature of detection-based solutions, major content-distributing tech companies continue to invest heavily in deepfake detection algorithms. There is little choice as the 2020 elections loom, and inaction will be extensively scrutinized and criticized.
In the federal government, DARPA’s Media Forensics (MediFor) program is awarding grants to fund some of the aforementioned deepfake research, and it is worth expanding that funding, as proposed in the current defense authorization bill that has passed the House of Representatives.16 Similarly, the current Intelligence Authorization Act, which has also passed the House and awaits Senate consideration, includes $5 million in funding for a competition run by IARPA to develop new detection tools.17
This would be money well-spent, though coordination and dissemination efforts could also use financial support. Because deepfake detection models can be used as discriminators in the deepfake generation process, these new models should no longer simply be made open-source. While leading researchers seem to have realized this, they are not necessarily equipped to decide which nonprofits and newspapers should have access to their models. Further, journalists and fact-checkers may not have the expertise to use these models effectively without training: A program that supports an intermediary to securely distribute the models and educate users in their application would add as much value as additional detection models.
As a worthwhile aside, this is not the last time that challenges like these will warrant funding or other rapid and informed action around AI. The ability to help address societal problems like these (see algorithmic bias or backdoors in neural networks) is a factor that the National Science Foundation (NSF) should take into consideration when awarding the recently announced $120 million for National AI Research Institutes.18
Beyond automated deepfake detection
Investing in new technologies
There are other steps that can be taken to reduce the impact of deepfakes, even once they are indistinguishable from real video. Reverse image search has empowered journalists, fact-checkers, and everyday netizens to unearth original photos from which forgeries are made. This type of tool allows users to upload an image, then uses computer vision to discover similar photos online, which can reveal the photo as altered or presented outside its original context. For instance, this was the case for many photos that circulated online during Hurricane Sandy.19
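The sketch below illustrates the matching idea at the core of reverse image search, using perceptual hashing: each known original gets a compact fingerprint, and a suspect image is flagged when its fingerprint is close to one in the index. It assumes the widely used Pillow and imagehash Python packages, and the file paths are illustrative; commercial services index billions of images, but the principle is similar.

```python
# Simplified sketch of reverse image search via perceptual hashing.
# Uses the Pillow and imagehash packages; file paths are illustrative.
from PIL import Image
import imagehash

# Build a small index of known originals (a real service indexes billions).
index = {path: imagehash.phash(Image.open(path))
         for path in ["originals/press_photo.jpg", "originals/storm_photo.jpg"]}

def find_near_matches(suspect_path, max_distance=8):
    """Return indexed originals whose perceptual hash is close to the suspect's."""
    suspect_hash = imagehash.phash(Image.open(suspect_path))
    return [path for path, h in index.items() if suspect_hash - h <= max_distance]

print(find_near_matches("downloads/viral_image.jpg"))
```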
Reverse video search, however, does not currently exist in a publicly available form. This means that the rapid proliferation of deepfake videos is leaving honest brokers behind, with an outdated set of tools for the job at hand. A financial investment in developing and publicly releasing this technology would enable the discovery of any deepfakes based on publicly available footage (as most have been so far) and could thus be very impactful. This line of funding could come through DARPA’s existing program or the new NSF-funded National AI Research Institutes.
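One plausible way such a tool could work, sketched below under the same assumptions as the image example, is to sample frames from a suspect video and look each one up in a perceptual-hash index of known source footage. This is an illustration of the concept, not a description of any existing product; a production system would need far more robust matching and indexing.

```python
# Sketch of a possible reverse video search building block: hash every Nth
# frame of a suspect video so each frame can be matched against indexed
# source footage. Uses OpenCV, Pillow, and imagehash; paths are illustrative.
import cv2
from PIL import Image
import imagehash

def sample_frame_hashes(video_path, every_n_frames=30):
    """Perceptually hash every Nth frame of a video."""
    capture = cv2.VideoCapture(video_path)
    hashes, frame_number = [], 0
    while True:
        ok, frame = capture.read()
        if not ok:
            break
        if frame_number % every_n_frames == 0:
            rgb = cv2.cvtColor(frame, cv2.COLOR_BGR2RGB)  # OpenCV loads frames as BGR
            hashes.append(imagehash.phash(Image.fromarray(rgb)))
        frame_number += 1
    capture.release()
    return hashes

# Each sampled hash could then be queried against an index of public footage,
# surfacing the original video a deepfake was built from.
suspect_hashes = sample_frame_hashes("downloads/suspect_clip.mp4")
```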
There are other technical approaches worth keeping an eye on. A small number of companies are selling blockchain-based verification, in which content can be registered to an unalterable ledger at the time of creation. This may be valuable for actors who have a significant stake in proving that their content is the original, should a discrepancy arise. While newswires and others may adopt these systems, they are unlikely to ever cover the vast majority of content created by ordinary citizens.
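A toy illustration of the registration idea follows: hash a piece of content at creation time and append the hash to a tamper-evident chain of records. Here the “ledger” is just a local hash chain, where a real service would anchor the entries to a public blockchain; the details are assumptions made for illustration.

```python
# Toy illustration of content registration: hash a file at creation time and
# append it to a tamper-evident hash chain. Commercial services anchor such
# records to a public blockchain; here the "ledger" is an in-memory list.
import hashlib
import json
import time

ledger = []  # each entry's hash incorporates the previous entry's hash

def register(content_path):
    """Record a content hash, chained to the previous ledger entry."""
    with open(content_path, "rb") as f:
        content_hash = hashlib.sha256(f.read()).hexdigest()
    record = {
        "content_hash": content_hash,
        "timestamp": time.time(),
        "previous": ledger[-1]["entry_hash"] if ledger else "",
    }
    record["entry_hash"] = hashlib.sha256(
        json.dumps(record, sort_keys=True).encode()).hexdigest()
    ledger.append(record)
    return record

def is_registered(content_path):
    """Check whether a file's hash matches any registered original."""
    with open(content_path, "rb") as f:
        content_hash = hashlib.sha256(f.read()).hexdigest()
    return any(entry["content_hash"] == content_hash for entry in ledger)
```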
Supporting social science
The massive datasets held by social media companies could potentially speak volumes to the problem of deepfakes, as well as to disinformation more broadly. Twitter has released data containing millions of tweets from state-sponsored propaganda campaigns,20 which has been used to learn about the different types of online trolls.21 Facebook is in the middle of a process to release very large troves of election-relevant data—though there are some delays and concerns22—through its Social Science One initiative.
After an experimental trial period, WhatsApp recently decided to reduce the number of chats to which a message can be forwarded at once to five (down from 256) to slow the spread of misinformation.23 This sounds like a positive step, but independent research showed it is only partially effective: fake content still spread quickly and widely through WhatsApp groups.24 Since WhatsApp does not make its data available for independent analysis, these researchers had to go through an arduous data collection process and were only able to collect some of the most pertinent data.25 To help discover new solutions to combat the spread of disinformation, policymakers should encourage technology companies to broadly disseminate their data to social scientists through responsible and privacy-preserving platforms.
CDA reform and deepfake bans
While limited in their influence, legal remedies have a role to play as well. A growing number of scholars are calling for qualifications on Section 230 of the Communications Decency Act, which would make it easier for private citizens to hold technology platforms accountable for disseminating harmful or slanderous content uploaded by their users. These qualifications would give individuals meaningful leverage to petition technology companies to remove deepfaked content of their likeness from these websites. Danielle Citron and Bobby Chesney summarize this position: “Section 230’s immunity provision has been stretched considerably since its enactment, immunizing platforms even when they solicit or knowingly host illegal or tortious activity.”26 Of course, reform of Section 230, oft-called the First Amendment of the internet, remains deeply controversial.
As a more direct approach, California just banned the creation and circulation of deepfakes of politicians within 60 days of an election.27 Whether this legislation is enforceable, effective, or legal28 is yet to be seen, but this effort is worth watching. There are other pertinent laws already in existence (e.g. copyright, the right of publicity, and tort law), but most depend on holding the deepfake creator responsible, which can be difficult due to online anonymity and legal jurisdiction.
A recent report from the RAND Corporation described the broad societal pattern of growing misinformation as “truth decay”29—and in keeping with this figure of speech, it is clear that deepfake detection has a half-life. While there is no single solution in new technology, research, or law, it is certainly time to start looking to these options with greater urgency and funding.
The Brookings Institution is a nonprofit organization devoted to independent research and policy solutions. Its mission is to conduct high-quality, independent research and, based on that research, to provide innovative, practical recommendations for policymakers and the public. The conclusions and recommendations of any Brookings publication are solely those of its author(s), and do not reflect the views of the Institution, its management, or its other scholars.
Microsoft provides support to The Brookings Institution’s Artificial Intelligence and Emerging Technology (AIET) Initiative, and Amazon, Facebook, and Google provide general, unrestricted support to the Institution. The findings, interpretations, and conclusions in this report are not influenced by any donation. Brookings recognizes that the value it provides is in its absolute commitment to quality, independence, and impact. Activities supported by its donors reflect this commitment.
Footnotes
- Zhengqi Li, Tali Dekel, Forrester Cole, Richard Tucker, Noah Snavely, Ce Liu, William T. Freeman of Google Research, “Learning the Depths of Moving People by Watching Frozen People,” arXiv:1904.11111 [cs.CV], April 2019. https://arxiv.org/pdf/1904.11111.pdf
- Jonathan Hui, “How deep learning fakes videos (Deepfake) and how to detect it?” Medium, https://medium.com/@jonathan_hui/how-deep-learning-fakes-videos-deepfakes-and-how-to-detect-it-c0b50fbf7cb9
- Yuezun Li, Ming-Ching Chang and Siwei Lyu, “In Ictu Oculi: Exposing AI Created Fake Videos by Detecting Eye Blinking” 2018. 1-7. 10.1109/WIFS.2018.8630787. http://www.cs.albany.edu/~lsw/papers/wifs18.pdf
- Britt Paris and Joan Donovan, “Deepfakes and Cheapfakes: The Manipulation of Audio and Visual Evidence,” Data and Society. https://datasociety.net/wp-content/uploads/2019/09/DS_Deepfakes_Cheap_FakesFinal.pdf
- The Oxford Internet Institute’s Computational Propaganda Research Project has documented “foreign influence operations in seven countries: China, India, Iran, Pakistan, Russia, Saudi Arabia, and Venezuela.” https://comprop.oii.ox.ac.uk/wp-content/uploads/sites/93/2019/09/CyberTroop-Report19.pdf
- Samantha Bradshaw and Philip N. Howard, “The Global Disinformation Order 2019 Global Inventory of Organised Social Media Manipulation,” Computational Propaganda Research Project, https://disinformationindex.org/wp-content/uploads/2019/05/GDI_Report_Screen_AW2.pdf
- Brian Resnick, “We’re underestimating the mind-warping potential of fake video” Vox, July 2018. https://www.vox.com/science-and-health/2018/4/20/17109764/deepfake-ai-false-memory-psychology-mandela-effect
- Robert Chesney and Danielle Keats Citron, “Deep Fakes: A Looming Challenge for Privacy, Democracy, and National Security” 2019. 107 California Law Review. https://papers.ssrn.com/sol3/papers.cfm?abstract_id=3213954
- Yuezun Li and Siwei Lyu, “Exposing DeepFake Videos By Detecting Face Warping Artifacts” arXiv:1811.00656 [cs.CV] November 2018. https://arxiv.org/pdf/1811.00656.pdf
- David Guera and Edward J. Delp, “Deepfake Video Detection Using Recurrent Neural Networks” IEEE, 10.1109/AVSS.2018.8639163 November 2018. https://engineering.purdue.edu/~dgueraco/content/deepfake.pdf
- Robert Bolles, J. Brian Burns, Martin Graciarena, Andreas Kathol, Aaron Lawson, Mitchell McLaren, Thomas Mensink. “Spotting Audio-Visual Inconsistencies (SAVI) in Manipulated Video” IEEE, 10.1109/CVPRW.2017.238 August 2017. https://staff.fnwi.uva.nl/t.e.j.mensink/publications/bolles17cvprwmf.pdf
- Shruti Agarwal, Hany Farid, Yuming Gu, Mingming He, Koki Nagano, and Hao Li “Protecting World Leaders Against Deep Fakes” IEEE, Computer Vision and Pattern Recognition (CVPR) Workshops, 2019, pp. 38-45 http://openaccess.thecvf.com/content_CVPRW_2019/papers/Media%20Forensics/Agarwal_Protecting_World_Leaders_Against_Deep_Fakes_CVPRW_2019_paper.pdf
- James Vincent, “Deepfake detection algorithms will never be enough” The Verge, June 2019. https://www.theverge.com/2019/6/27/18715235/deepfake-detection-ai-algorithms-accuracy-will-they-ever-work
- Ian J. Goodfellow, Jean Pouget-Abadie, Mehdi Mirza, Bing Xu, David Warde-Farley, Sherjil Ozair, Aaron Courville, and Yoshua Bengio. “Generative Adversarial Networks” arXiv:1406.2661 [stat.ML] June 2014. https://arxiv.org/abs/1406.2661
- Nick Dufour and Andrew Gully, “Contributing Data to Deepfake Detection Research” Google AI Blog, September 2019. https://ai.googleblog.com/2019/09/contributing-data-to-deepfake-detection.html
- National Defense Authorization Act for Fiscal Year 2020, H.R. 2500, 116th Congress (2019) https://www.congress.gov/bill/116th-congress/house-bill/2500/text
- Damon Paul Nelson and Matthew Young Pollard Intelligence Authorization Act for Fiscal Years 2018, 2019, and 2020, H.R. 3494, 116th Congress (2019) https://www.govtrack.us/congress/bills/116/hr3494
- “NSF leads federal partners in accelerating the development of transformational, AI-powered innovation” NSF News Release, October 2019. https://www.nsf.gov/news/news_summ.jsp?cntn_id=299329&org=NSF&from=news
- Alexis C. Madrigal, “Sorting the Real Sandy Photos From the Fakes” The Atlantic¸ October 2012. https://www.theatlantic.com/technology/archive/2012/10/sorting-the-real-sandy-photos-from-the-fakes/264243/
- Sara Harrison, “Twitter’s Disinformation Data Dumps Are Helpful—to a Point” Wired, July 2019. https://www.wired.com/story/twitters-disinformation-data-dumps-helpful/
- Darren L. Linvill and Patrick L. Warren, “Troll Factories: Manufacturing Specialized Disinformation on Twitter” Society for Institutional & Organizational Economics Conference Proceedings, 2019. http://pwarren.people.clemson.edu/Troll_Factories_v2_Linvill_Warren.pdf
- Jeffrey Mervis, “Privacy concerns could derail unprecedented plan to use Facebook data to study elections” Science Magazine, September 2019. https://www.sciencemag.org/news/2019/09/privacy-concerns-could-derail-unprecedented-plan-use-facebook-data-study-elections
- Laura Hazard Owen, “WhatsApp limits message forwarding in order to fight ‘misinformation and rumors’” Nieman Lab, January 2019. https://www.niemanlab.org/2019/01/whatsapp-limits-message-forwarding-in-order-to-fight-misinformation-and-rumors/
- Philipe de Freitas Melo, Carolina Coimbra Vieira, Kiran Garimella, Pedro O. S. Vaz de Melo, and Fabrício Benevenuto. “Can WhatsApp Counter Misinformation by Limiting Message Forwarding?” arXiv:1909.08740v2 [cs.CY] September 2019. https://arxiv.org/pdf/1909.08740.pdf
- Kiran Garimella and Gareth Tyson, “WhatsApp, Doc? A First Look at WhatsApp Public Group Data” arXiv:1804.01473v2 [cs.SI] July 2018. https://arxiv.org/pdf/1804.01473.pdf
- Robert Chesney and Danielle Keats Citron, “Deep Fakes: A Looming Challenge for Privacy, Democracy, and National Security” 2019. 107 California Law Review. https://papers.ssrn.com/sol3/papers.cfm?abstract_id=3213954
- “California makes ‘deepfake’ videos illegal, but law may be hard to enforce” The Guardian, October 2019. https://www.theguardian.com/us-news/2019/oct/07/california-makes-deepfake-videos-illegal-but-law-may-be-hard-to-enforce
- K.C. Halm, Ambika Kumar Doran, Jonathan Segal, and Caesar Kalinowski IV. “Two New California Laws Tackle Deepfake Videos in Politics and Porn” October 2019. https://www.dwt.com/insights/2019/10/california-deepfakes-law
- Jennifer Kavanagh and Michael D. Rich, “Truth Decay: A Threat to Policymaking and Democracy” The RAND Corporation, 2018. https://www.rand.org/pubs/research_briefs/RB10002.html