Commentary

8 best practices for state election officials on AI

March 11, 2024


  • As the proliferation of AI technology creates novel challenges and risks to the administration of elections, state election officials should look to lead the current debates around AI’s deployment and oversight.
  • Election officials should pursue this goal through open dialogues with stakeholders around the safe and transparent use of AI tools as part of elections implementation and through drafting explicit protections against the potential and already observed external misuses of AI in elections.
  • The authors propose eight best practices for state election officials that, while no replacement for a comprehensive federal statutory solution, will put a wide range of stakeholders on alert.
Department of Elections workers transport a box of ballots at the San Francisco City Hall voting center during the Super Tuesday primary election in San Francisco, California, U.S. March 5, 2024. Credit: REUTERS/Loren Elliott

In the lead-up to the 2024 U.S. presidential election, state election officials are grappling with the benefits and dangers of artificial intelligence. Election officials must confront these novel challenges in a regulatory and political environment lacking in uniformity and amid technologies, like generative AI, that are already being abused in election settings.

To defend elections from the dangers of AI now and in the run-up to the November general election, state election officials should look for ways to lead the current debates around AI’s deployment and oversight. To start, these leaders should voluntarily implement measures addressing the use and integration of AI in elections architecture. That starts with having open dialogues with stakeholders around the safe and transparent use of AI tools as part of elections implementation. It also includes drafting explicit protections against the potential and already observed external misuses of AI in elections (that is, by those outside elections offices).

To help frame that discussion for officials and other stakeholders, in this essay we propose eight AI best practices for state election officials. These are steps that officials can act on right now using their existing regulatory authority, without state or federal legislation. Congressional conversations around AI regulation have been slow to produce actual legislation, although there are hopes that the new bipartisan House task force will come up with a bill that can become law in 2024. Nevertheless, state election officials can start establishing their own guidance by addressing critical areas of concern through the following recommendations:

  1. Dialogue with voters and the public around potential challenges of AI upfront to present benefits and mitigate risks.
  2. Ensure that humans are always in the loop when it comes to AI-generated content and tools around election matters.
  3. Evaluate AI tools continuously throughout their development, from their design to integration and operation in electoral processes, and place close scrutiny on the procurement of any product/service that relies on AI.
  4. Develop a review and feedback process for AI tools and information campaigns that is updated regularly and disseminated to voters and other stakeholders.
  5. Train staff to use AI responsibly.
  6. Seek collaboration from a broad range of stakeholders in developing approaches to AI.
  7. Test for and mitigate potential AI dangers prior to launching AI tools and services and, when issues emerge, step back to interrogate the problems.
  8. Apply focused oversight on generative AI, especially election-related AI chatbots that can serve to discourage and, in some instances, disenfranchise voters.

1) Dialogue with voters and the public around potential challenges of AI upfront to present benefits and mitigate risks.

Voters have a right to know when AI is used in their election offices, what the potential risks are, and what strategies exist for mitigation when AI goes awry. Election officials should share this educational content through a variety of platforms that appeal to varied constituencies. For example, that may include using social media platforms like TikTok and Instagram to capture younger voters, 42% and 44% of whom, respectively, regularly get their news from these platforms. These platforms are thus far underused by election officials in this way: Researchers from the Social Science Research Council (SSRC) found that, during the 2020 general election, only two percent of counties had Instagram or TikTok accounts and only nine percent had Twitter accounts.

Middle-aged and older demographics may be better reached on AI election-related issues through Facebook (a platform used most prominently by those ages 30 to 49), email, and mail, or through websites such as AARP's. For example, the U.S. Election Assistance Commission and SSRC both highlighted the Seminole County Supervisor of Elections Office in Florida, which employed Facebook's voting alert feature, giving the state's 67 counties the ability to instantly share accurate information. Voters who lived in Florida counties where the county supervisor of elections shared election information via Facebook were found to be more likely to register to vote. The same tactic should be used for spreading AI-related election information. For voters without social media or home internet access, a separate information campaign might be waged with traditional paper mailers. One such campaign was completed in Paulding County, Georgia, where a voter education guide was mailed to every household with registered voters. Materials on AI and elections should also be produced in multiple languages to reach non-English-speaking voters.

Planned and strategic voter education campaigns can mitigate the effects of mis- and disinformation generated by AI tools. Several states have had success waging voter education campaigns about elections in general. Missouri launched a "Love your Ballot" initiative to preempt common ballot mistakes, releasing eight videos on Facebook, Twitter, Instagram, and TikTok that address common errors. Ohio's "Behind the Ballot" initiative gave voters a behind-the-scenes tour of an active election office, where they saw recruitment, training, accuracy testing, and auditing, as a means of increasing voter confidence in the election. Officials should implement similar initiatives that center on their office's use of AI. Another potential campaign could see trained staff monitoring social media and leading an online response aimed at pre-bunking and debunking AI misinformation. If implemented effectively, these campaigns could increase voter education by providing authoritative information on AI prior to Election Day.

2) Ensure that humans are always in the loop when it comes to AI-generated content and tools around election matters.

Election administrators are beginning to use AI for algorithmic decision-making—making choices using AI or otherwise automated systems—in election contexts. Specifically, AI systems are being used by election offices to verify mail-in ballots, purge voter records, and draw district lines. Trained staff should be involved in all stages of election decisions made with any involvement of AI systems. Especially because AI captures preexisting human biases, oversight to protect the votes of historically disenfranchised and other more vulnerable groups is key. A study in Wisconsin, for instance, showed that during AI list maintenance, Black voters were more than twice as likely, and Hispanic voters almost twice as likely, to be mistakenly flagged as ineligible to vote as white voters. According to the Brennan Center for Justice, such documented cases of bias could result in someone incorrectly "being removed from the voter rolls, being denied the ability to cast a ballot, or not having their vote counted." Election officials should therefore review all recommendations made by AI to preempt and prevent such errors. Even with human oversight, election officials should also develop clear and executable contingency plans in the case of an AI misfire. One way to operationalize this review is sketched below.
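As a concrete illustration of that workflow, here is a minimal sketch in Python, assuming a hypothetical flag record and review queue (no real election system's API is implied): the AI may only suggest a flag, and nothing changes on the voter roll until a named staff member rules on it.

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone

@dataclass
class VoterFlag:
    """An AI-generated flag on a voter record, pending human review."""
    voter_id: str
    reason: str                   # e.g., "possible duplicate registration"
    model_confidence: float       # score from the (hypothetical) AI system
    reviewed_by: str | None = None
    decision: str | None = None   # "confirmed"/"rejected" -- set only by a human
    flagged_at: datetime = field(default_factory=lambda: datetime.now(timezone.utc))

class ReviewQueue:
    """Holds AI flags until a trained staff member rules on each one."""
    def __init__(self) -> None:
        self.pending: list[VoterFlag] = []
        self.resolved: list[VoterFlag] = []

    def add_flag(self, flag: VoterFlag) -> None:
        # The AI may only *suggest*; it never removes a record itself.
        self.pending.append(flag)

    def resolve(self, voter_id: str, reviewer: str, confirm: bool) -> VoterFlag:
        flag = next(f for f in self.pending if f.voter_id == voter_id)
        flag.reviewed_by = reviewer
        flag.decision = "confirmed" if confirm else "rejected"
        self.pending.remove(flag)
        self.resolved.append(flag)  # retained for audit and contingency review
        return flag

# Usage: the AI flags a record; the voter roll is untouched until a named
# staff member confirms or rejects the flag.
queue = ReviewQueue()
queue.add_flag(VoterFlag("WI-0042", "possible out-of-state move", 0.91))
flag = queue.resolve("WI-0042", reviewer="staff.jdoe", confirm=False)
print(flag.decision)  # "rejected" -- the voter stays on the rolls
```

Keeping resolved flags, rather than discarding them, gives the office an audit trail that also supports the contingency planning described above.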

3) Evaluate AI tools continuously throughout their development, from their design to integration and operation in electoral processes, and place close scrutiny on the procurement of any product/service that relies on AI.

Election officials must act with care when selecting which AI tools to utilize. Officials should test and/or pilot the efficacy of AI software prior to implementation, and reliable vendors should promote the quality, privacy, and security of their products. Because AI models are developed using external training data that may include potential biases, election officials should require that vendors disclose the data used to design their products. If possible, officials should also request that vendors complete any demonstrations of their technology in the context of state and local voter data. These prerequisites to purchase will give officials a clearer picture of how accurately the system will perform and help them anticipate potential errors, from data-collection problems to outright breakdowns of voting systems. State election officials should also verify that they (and not the vendor) will maintain control and ownership of all data the system processes post-purchase, especially at a time when inconsistent data privacy laws exist across state lines. A sketch of what such a pilot evaluation might look like follows.
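To make "test and/or pilot" concrete, here is a minimal sketch, assuming hypothetical record fields and treating the vendor's system as an opaque scoring function: it compares vendor flags against a hand-verified local sample and reports the false-positive rate per demographic group, exactly the kind of disparity the Wisconsin study above documented.

```python
from collections import defaultdict

def pilot_report(records, vendor_flag):
    """Score a vendor's AI flags against hand-verified local records.

    `records`: dicts with hypothetical fields
        {"voter_id": str, "group": str, "truly_ineligible": bool}
    `vendor_flag`: the vendor's (assumed) predict function, record -> bool.
    Returns overall accuracy and the false-positive rate per group;
    false positives are eligible voters wrongly flagged for removal.
    """
    correct = 0
    false_pos = defaultdict(int)   # wrongly flagged eligible voters per group
    eligible = defaultdict(int)    # eligible voters per group
    for rec in records:
        pred = vendor_flag(rec)
        if pred == rec["truly_ineligible"]:
            correct += 1
        if not rec["truly_ineligible"]:
            eligible[rec["group"]] += 1
            if pred:
                false_pos[rec["group"]] += 1
    accuracy = correct / len(records)
    fp_rates = {g: false_pos[g] / n for g, n in eligible.items() if n}
    return accuracy, fp_rates

# Usage with a toy verified sample and a stand-in vendor function that
# (wrongly) flags everyone in group "B":
sample = [
    {"voter_id": "1", "group": "A", "truly_ineligible": False},
    {"voter_id": "2", "group": "B", "truly_ineligible": False},
    {"voter_id": "3", "group": "B", "truly_ineligible": True},
]
accuracy, fp_rates = pilot_report(sample, lambda r: r["group"] == "B")
print(accuracy, fp_rates)  # the group-B disparity is exactly what to catch
```

Requiring this kind of report on local data before purchase turns the vendor's accuracy claims into something the office can verify itself.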

4) Develop a review and feedback process for AI tools and information campaigns that is updated regularly and disseminated to voters and other stakeholders.

State election officials should implement a well-defined feedback and review protocol for any AI tools utilized as part of their operations. For instance, voters could receive false information from an official chatbot that sends them to the wrong polling place. The voter may then need to cast a provisional ballot, which allows a voter whose qualifications or registration are in question to cast a vote that is then reviewed by the local electoral board to determine whether it will count. Officials should seek to develop a transparent appeals process for voters who feel misled by inaccurate AI-generated information from the election office, so the voter can explain the situation and ensure their vote is counted. In addition, state election officials should consistently solicit feedback and implement feasible suggestions from both staff and constituents on AI systems. A feedback form or survey on the office's website could serve this purpose, as sketched below.
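As one illustration, here is a minimal sketch of the record such a process might keep, with hypothetical fields: every report is logged, triaged, and traceable to a resolution that can be shared back with the voter and summarized in public disclosures.

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone
from enum import Enum

class Status(Enum):
    RECEIVED = "received"
    UNDER_REVIEW = "under review"
    RESOLVED = "resolved"

@dataclass
class AIFeedbackReport:
    """One voter or staff report about an AI tool (hypothetical schema)."""
    reporter: str             # "voter" or "staff"
    tool: str                 # e.g., "website chatbot"
    description: str          # what the AI got wrong
    status: Status = Status.RECEIVED
    resolution: str = ""      # filled in when the office closes the report
    submitted_at: datetime = field(default_factory=lambda: datetime.now(timezone.utc))

# Usage: a voter reports a chatbot error; staff review it and record the fix.
report = AIFeedbackReport(
    reporter="voter",
    tool="website chatbot",
    description="Chatbot gave the wrong polling place for precinct 12.",
)
report.status = Status.RESOLVED
report.resolution = "Answer corrected; voter's provisional ballot reviewed."
```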

5) Train staff to use AI responsibly.

Election offices should train their staff to use and deploy AI tools responsibly. While using AI can be beneficial to election offices, state officials should consider both realistic needs for hiring new staff trained in AI and what additional training(s) may be necessary to ensure existing staff are sufficiently educated on this technology. Officials should weigh those needs to determine to what extent AI should be integrated into their processes and protocols.

All staff should be trained in AI usage, effective communications regarding AI in customer service contexts, mitigation of common AI risks (including potential biases), and methods to maintain human involvement in decision-making. Staff should also be trained to appropriately engage vendors on AI products and services. Some such trainings already exist for government employees at low or no cost.

Additionally, at least some election staff should receive specialized training in recognizing AI-generated content—with a special focus on deepfakes, voice cloning, and other techniques likely to be used by bad actors—and in counter-disinformation tactics. These staff might serve as what states could consider "Election Digital Navigators," whose main responsibilities would be daily threat mitigation using AI-identification mechanisms developed by the private corporations themselves. For example, OpenAI plans to introduce tools that credential images originating from DALL·E, an AI system that can generate realistic images from descriptions. This and other such tools should be considered for use by state election officials, especially in the months leading up to the general election. A rough illustration of this kind of screening appears below.
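Credentialing tools of this kind generally work by attaching signed provenance metadata to an image. As a rough screening illustration only, and not a substitute for a dedicated credential verifier, the sketch below uses the Pillow imaging library to flag metadata keys that hint at embedded provenance data; the marker keywords are assumptions, and absent metadata proves nothing, since metadata is easily stripped.

```python
# A rough screening sketch, not cryptographic verification: real content
# credentials must be validated with dedicated tooling.
from PIL import Image  # pip install Pillow

# Keywords that *might* indicate embedded provenance data (assumptions).
PROVENANCE_HINTS = ("c2pa", "contentcredentials", "jumbf", "provenance")

def screen_for_provenance(path: str) -> list[str]:
    """Return metadata keys hinting at embedded provenance credentials."""
    img = Image.open(path)
    hits = []
    for key, value in img.info.items():  # format-level metadata, e.g., PNG text
        blob = f"{key} {value}".lower()
        if any(hint in blob for hint in PROVENANCE_HINTS):
            hits.append(str(key))
    return hits

# Usage: an empty result does NOT mean an image is authentic, but a hit
# tells staff there are credentials worth verifying with proper tooling.
print(screen_for_provenance("suspect_flyer.png"))
```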

6) Seek collaboration from a broad range of stakeholders in developing approaches to AI.

Election offices should collaborate with internal and external partners when developing their approaches to AI, drawing on independent, evidence-based sources and experts. Gathering data and existing research on election interference, voter disenfranchisement, and other historic ways that domestic and international elections have been sabotaged will be critical to the future of democracy.

State election officials trained in AI and elections should also work with local and federal partners. Minnesota Secretary of State Steve Simon said at a Senate hearing in September 2023 that his office has worked with such partners "to monitor and respond to inaccuracies that could morph into conspiracy theories on election-related topics." In concert with partners, election staff should deliberate carefully about which inaccuracies merit an official response, to avoid drawing undue attention to otherwise low-impact misrepresentations.

Within a state, well-resourced larger counties should partner with smaller jurisdictions to trade knowledge and feedback on AI approaches. Larger counties should provide talking points, templates, and other AI-related tools to their less-resourced peers.

7) Test for and mitigate potential AI dangers prior to launching AI tools and services, and when issues emerge, step back to interrogate the problems.

To preempt AI-generated crises, state election officials should also run tabletop simulations to game out potential scenarios in the run-up to elections. For example, Arizona Secretary of State Adrian Fontes is holding multiple training simulations with staff and elected county officials using manufactured deepfakes. This process is helpful for testing both the types of disinformation that may spread and the preventative steps that can be taken to address them. It also strengthens local oversight of general and local election processes.

8) Apply focused oversight on generative AI, especially election-related chatbots that can serve to discourage and, in some instances, disenfranchise voters.

Just as voters are increasingly relying on AI chatbots to seek election information, so too are election officials using AI systems like chatbots on their websites to answer election-related voter inquiries. While AI can shorten staff response windows and reduce staff time spent answering queries, chatbots can also provide harmful misinformation. For instance, a recent study found that one popular chatbot inaccurately responded to one out of three basic questions about candidates, polls, and voting.

Election officials should therefore preemptively stand up credible sources of information in all communications around AI and elections, as the National Association of Secretaries of State’s (NASS) “Trusted Sources” project has done. The project serves as a central source of credible election information that directs voters to frequently asked questions or election fact sheets created by their election offices.

Officials should also ensure that chatbots direct users to authoritative and reliable sources. For example, NASS has partnered with OpenAI to refer ChatGPT users seeking election information to CanIVote.org. State election offices have established, and should continue to establish, similar protocols. California's election website chatbot, Sam, also minimizes potential misinformation by directing voters to linked pages when answering election-related questions. The underlying design pattern is sketched below.
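The design pattern these offices are converging on can be sketched simply: answer only from curated, office-approved text, and refer everything else to an authoritative source rather than letting a generative model improvise. A minimal illustration, with placeholder FAQ entries and keyword matching standing in for whatever retrieval a real deployment would use:

```python
# A minimal sketch of a "grounded" election chatbot: answers come verbatim
# from office-approved text; anything unmatched is referred out rather than
# generated. FAQ entries are placeholders; the referral site is the one
# NASS points ChatGPT users to.

OFFICIAL_FAQ = {
    "register": "Registration information: see your state election office's website.",
    "polling place": "Find your polling place through your county election office.",
}

REFERRAL = "For authoritative voting information, visit https://www.canivote.org"

def answer(question: str) -> str:
    q = question.lower()
    for keyword, approved_text in OFFICIAL_FAQ.items():
        if keyword in q:
            return approved_text  # verbatim, office-approved
    # No match: refer out instead of risking an invented (and possibly
    # disenfranchising) answer.
    return REFERRAL

print(answer("Where is my polling place?"))
print(answer("Who is winning the election?"))  # falls back to the referral
```

The design choice worth noting is the fallback: a chatbot that refuses to improvise can still be useful, while one that improvises can disenfranchise.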

Conclusion

These eight steps are the beginning of ensuring more seamless elections through appropriately utilizing AI while hedging against the risks that this new technology may bring. These steps should be implemented now and are within the authority of state and local election officials to undertake. Doing so may not be as far-reaching as a comprehensive federal statutory solution, but we should not let the best be the enemy of the good. Following these best practices will put a wide range of stakeholders, from voters to malicious election interferers, on alert.


Acknowledgements and disclosures

Meta is a general, unrestricted donor to the Brookings Institution. The findings, interpretations, and conclusions posted in this piece are solely those of the authors and are not influenced by any donation.