AI can strengthen U.S. democracy—and weaken it

November 21, 2023

  • Rapid advancements in artificial intelligence (AI) technologies have the potential to transform democratic governance and its execution.
  • AI-powered tools could assist in administering elections, but their capacity to synthesize vast amounts of data and disseminate mis- and disinformation could also pose risks to election integrity.
  • AI has many opportunities to democratize the campaign playing field, but it could also turbocharge preexisting election interference tactics.
  • Policymakers, advocates, and citizens must keep up with continuing technological advancements to minimize AI’s disruptive effects and maximize its positive democratic potential.
People stand behind privacy booths to fill out their ballots before casting their votes for the 2022 midterm election, at P.S. 101 in the Queens borough of New York, NY, November 8, 2022. (Photo by Anthony Behar/Sipa USA)

Following Senate Majority Leader Chuck Schumer’s (D-NY) latest AI Insight Forum, which focused on elections and good governance, all eyes are on the developing intersection of artificial intelligence (AI) and democracy. Complementing the bipartisan Senate forum series are the Biden administration’s recent executive order on AI and Vice President Kamala Harris’ trip to the United Kingdom to attend the AI Safety Summit. This increased focus on AI comes at a time of heightened attention to the state of democracy in both the United States and globally. In this first part of a new series on the risks and possibilities of the confluence between AI and democracy, we provide an overview of three principal areas where AI may transform democratic governance and its execution. Subsequent installments of the series will offer deeper dives into these topics and policy recommendations for lawmakers.

Election administration

AI could assist election officials and workers in their critical efforts to oversee the polls, whether in the United States or in democracies around the world. For example, AI could revamp election administration processes to make them more efficient, reliable, and secure. When monitored carefully, AI could identify concerning anomalies in voter lists and voting machines, helping officials uncover fraud or disenfranchisement before it affects results. AI-powered tabulators could scan paper ballots more quickly than poll workers, thereby reducing the time necessary to report election results or to conduct recounts. That quicker clip could help quiet accusations of fraud during close, contentious races like those seen in the wake of the 2020 election, when an influx of mail-in ballots necessitated multi-day counts in several pivotal states.

But election administrators must be wary of possible risks. AI's capacity to locate and synthesize vast amounts of public data could be used to generate phishing attacks tailored to election officials whose contact information exists in the public domain. Because these officials often have privileged access to sensitive voter and government data, the integrity of the elections they oversee may be jeopardized if their personal information and administrative duties are exploited by malware or ransomware. AI models could also be deployed to suppress and disenfranchise voters by disseminating misinformation or disinformation, particularly to less-informed citizens who may be more vulnerable to baseless election fraud narratives. There could also be partisan biases in the way voter rolls are "cleaned up" using AI, with minority voters disproportionately targeted for removal.

Campaigns and voter education

AI is already altering the way candidates for elected office conduct their campaigns in the United States and in other democracies around the globe. New technologies are also reshaping—for better and for worse—how voters locate and consume information about candidates and issues. These shifts present opportunities and risks at every stage leading up to Election Day.

Opportunities for AI to democratize, improve, and level the campaign playing field abound. AI tools could lower financial barriers to entry for first-time and underfunded candidates. Digital fundraising mechanisms could benefit from AI-powered streamlining. Candidates may also avail themselves of targeted advertisements that more effectively reach undecided voters, which has the knock-on effect of better educating the electorate about their options at the ballot box. AI can also be used effectively and transparently by election administration officials to track and report on harmful hate speech that unfairly tilts the playing field for candidates and affects voter decision-making.

Conversely, AI may worsen the flood of misinformation and disinformation now typical of election season. AI equips illiberal nonstate actors and autocracies with an array of comparatively low-cost, automated tools by which adversaries may pry the electorate further apart, fueling caustic polarization and internal destabilization. Political bots, deepfakes, and other AI-generated visuals have already scrambled pre-election information ecosystems in democracies across the globe. AI's capacity to rapidly generate "pink slime"—news sites composed entirely of fabricated stories—exhibits its potential to turbocharge preexisting election interference tactics. The risk of AI-fueled informational chaos grows more acute as many democracies march toward high-stakes elections in 2024.

Citizen engagement and participation

In addition to elections and voting, other dimensions of democratic governance stand to benefit and face challenges from the AI revolution, domestically and internationally. New technologies will help citizens voice their opinions, organize others with similar alignments, and act on their priorities beyond the ballot box. However, those same technologies may also allow bad actors to camouflage their machinations as genuine public sentiment.

AI could further democratize the public comment process, a cornerstone of public influence over policy- and rulemaking. Machine learning can collect and summarize an individual's interests and may eventually match those priorities with specific issues on which regulatory agencies are receiving public comment. Generative AI can help both activists and seasoned politicians at the national and local levels make their comments more persuasive for various audiences. Such technology may also bolster citizens' understanding of complex legislation that their elected officials are considering by simplifying legislative text and tracking their representatives' votes.

The flip side is that advocacy groups or individuals looking to misrepresent public opinion may find an ally in AI. AI-fueled programs, like ChatGPT, can fabricate letters to elected officials, public comments, and other written endorsements of specific bills or positions that are often difficult to distinguish from those written by actual constituents. These fabrications—and the speed and volume at which they can be created—may be used to manufacture the appearance of public consensus on a given issue and pressure legislators to act on a desired agenda. Worse still, voice and image replicas harnessed from generative AI tools can mimic candidates and elected officials. These tactics could sow voter confusion and, once voters become aware of such scams, degrade confidence in the electoral process.


The stakes are high to identify emerging risks and rewards at the confluence of AI and democracy. Next year, highly consequential elections will take place in the United States and in countries around the world, together representing more than 3.5 billion people. Anti-democratic actors and autocrats will seek every opportunity to shake confidence in democracy, targeting the systems that ensure free and fair elections and good governance.

Democratic governments, policymakers, and election advocates are responding with best practices and necessary words of warning. The Biden administration’s aforementioned AI Executive Order has been lauded as a promising starting point for a top-down approach to these new technologies. AI-friendly and -wary lawmakers alike have begun setting their sights on a bipartisan AI regulatory regime. And civil society has an important role to play, demonstrated by the work of organizations like the Leadership Conference on Civil and Human Rights, the Lawyers Committee for Civil Rights under Law, the Brennan Center for Justice, Public Citizen, and Brookings (which does its own extensive work on the subject). They, along with many others, are marshalling their technical expertise and commitment to democracy to provide cutting-edge guidance to local, state, and federal regulators.

Minimizing the disruptive effects of the AI revolution and maximizing its positive democratic potential is an imperative for the upcoming 2024 U.S. presidential election and other similarly pivotal contests. But the need for effective, transparent strategies and guidelines goes beyond next year. AI-powered tools are in their infancy. Their impacts will likely reach every corner and function of government, from how agencies collect data to how elections are run to how voters register to vote and cast their ballots. As technologies evolve, policymakers, advocates, and citizens will need to keep up to ensure AI is leveraged as a force for a better and more inclusive democracy.