Commentary

A policy framework to govern the use of generative AI in political ads

Matt Perault, Director, Center on Technology Policy, UNC-Chapel Hill
J. Scott Babwah Brennen, Head of Online Expression Policy, Center on Technology Policy, UNC-Chapel Hill

December 11, 2023


  • A recent report from the Center on Technology Policy at UNC-Chapel Hill investigates the potential harms of generative AI in political ads and proposes a policy framework to govern its use.
  • AI-generated political ads may amplify bias, and their impact may be stronger on smaller, down-ballot races.
  • Policymakers have proposed watermarks and disclaimers as potential solutions, but there is little evidence that these tools will successfully address the potential harms of synthetic media in political advertising.
  • Instead, policy interventions should specifically target electoral harms, not technologies, and promote education about the use of GenAI in political ads.
Rocks hold down a pile of literature promoting Florida Governor Ron DeSantis, from his "Never Back Down" political action committee, outside the venue where he kicked off his campaign for the 2024 Republican U.S. presidential nomination with an evening campaign rally in West Des Moines, Iowa, U.S., May 30, 2023. Credit: REUTERS/Scott Morgan

As we prepare for the 2024 election, the first presidential election in which generative AI (GenAI) technologies will likely be in widespread use, should we be concerned about the growing use of GenAI in political ads? If so, what should we do about it?

Earlier this month, the Center on Technology Policy at UNC-Chapel Hill released a report that sought to answer these questions. Our goal was to examine the academic literature on potential harms and use this analysis to develop a policy framework to govern the use of GenAI in political ads.

Over the last year, amid a trickle of examples of GenAI in political ads, some commentators have raised alarms that GenAI could lower the cost of producing photo-realistic deceptive content, increasing the scale, personalization, apparent authenticity, and persuasiveness of deceptive ads.

Broadly, we found that the existing—albeit limited—academic literature suggests that some of these concerns have been overstated, while others have been understated. There is little evidence that either political ads or individual pieces of online misinformation have a strong capacity to change people’s minds, nor is there much evidence that GenAI will change their impact.

But we also found that smaller, down-ballot races may be more susceptible to the impact of political ads, since there’s often much less advertising in these races, voters are less familiar with the candidates, and there’s less oversight and media attention. The academic literature also supports concerns that AI-generated political ads may amplify bias.

So far, policymakers have responded to these concerns by proposing mandatory watermarks on GenAI content, disclaimers on political ads that use GenAI, and outright bans on deceptive GenAI content in political ads. Yet there remains little evidence that watermarks or disclaimers can successfully address these harms.

Instead, we propose that policymakers consider public policy options that are more firmly rooted in the academic literature on these issues. Our recommendations center on two concepts: first, public policy should target electoral harms, not technologies; and second, public policy should promote learning about the use of GenAI in political ads. Ultimately, we argue that it makes little difference whether a piece of deceptive content was created by a generative AI model or in Photoshop. What matters is the underlying electoral harm that advertising may pose.

To target electoral harms, policymakers should consider several options. First, Congress and the states should outlaw voter suppression. There is currently no federal law prohibiting voter suppression, and several states lack one as well. Second, governments should allocate funding for law enforcement to enforce existing civil rights law. The Department of Justice’s Civil Rights Division is charged with enforcing several civil rights statutes; to support its work, the government can provide it with the resources it needs to address the use of GenAI to discriminate against people and deny them their rights.

Third, local and state governments should “flood the zone” with factual content. Increasing the volume of high-quality content in local races might diminish the potential impact of false and low-quality content. Just as GenAI can be used to spread disinformation, it can also be used to develop and distribute accurate information at scale.

Fourth, governments should help to prepare people for a world with more and more synthetic content, such as deepfakes and manipulated images. To arm people with the skills they need to better understand what they are seeing and reading, policymakers should fund digital literacy programs that are focused on detecting and contextualizing false online content.

The Federal Election Commission (FEC) can play an important role in educating people about the use of GenAI technology in political ads as well. Our fifth recommendation is for the FEC to publish guidance for political advertisers on identifying and mitigating bias in political ads, focusing specifically on bias introduced by GenAI models.

We also propose several recommendations to improve our understanding of the use and impact of GenAI in political ads. Because the research remains incomplete and inconclusive, further study will generate the data needed to develop better AI governance in the long run.

We propose funding empirical studies on the impact of GenAI in political ads and the effectiveness of GenAI-related interventions, creating policy experiments to test interventions aimed at mitigating the negative impact of GenAI in political ads, and conducting studies on the strengths and weaknesses of various regimes for imposing liability for GenAI content.

We also offer two recommendations focused on increasing transparency, so that researchers and the press can monitor and study political advertising more effectively. Federal and state governments should publish political ad archives so that people can see advertisements across a wide range of media, not just tech platforms. And the FEC should require that campaigns report their vendors’ advertising spending, which would provide a much more comprehensive picture of where campaigns are advertising.
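To make the archive idea concrete, here is a minimal sketch of what a single record in a public political ad archive might capture. The field names, structure, and example values are our own illustration, not a schema proposed in the report or used by any existing archive:

```python
from dataclasses import dataclass
from datetime import date
from typing import Optional

# Hypothetical sketch: one record in a public political ad archive.
# All field names are illustrative assumptions, not an official schema.
@dataclass
class AdArchiveRecord:
    sponsor: str            # committee or campaign that paid for the ad
    vendor: Optional[str]   # ad buyer/vendor, enabling vendor-level spend reporting
    medium: str             # e.g., "tv", "radio", "online", "direct_mail"
    first_run: date
    last_run: date
    spend_usd: float        # reported spending for this placement
    uses_genai: Optional[bool]          # disclosed use of generative AI, if any
    creative_url: Optional[str] = None  # link to the ad creative itself
    targeting_notes: str = ""           # targeting description, if disclosed

# Example: a hypothetical record for a local TV buy.
record = AdArchiveRecord(
    sponsor="Friends of Jane Doe",
    vendor="Example Media Buyers LLC",
    medium="tv",
    first_run=date(2024, 9, 1),
    last_run=date(2024, 9, 14),
    spend_usd=12_500.00,
    uses_genai=True,
)
print(record)
```

A schema along these lines would cover both recommendations at once: the record spans media beyond tech platforms, and the vendor and spend fields would support the kind of vendor-level reporting the FEC could require.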

As the 2024 election approaches, we should implement a policy framework that enables us to mitigate known risks and address gaps in our current understanding of GenAI. The goal is to ensure that innovation serves the democratic process without undermining it.
