
Commentary

How different states are approaching AI

Gregory S. Dawson, Clinical Professor, W. P. Carey School of Business, Arizona State University
Kevin C. Desouza
James S. Denford, Professor, Management Department, Royal Military College of Canada
Marc Barda Picavet, Faculty of Business and Law, Queensland University of Technology

August 18, 2025


  • States across the country are legislating on AI, and an analysis of bills reveals areas of bipartisan interest as well as diverging approaches.
  • Bills from this year seek to address several major themes, including deepfakes, government use of AI, nonconsensual intimate imagery, and automated decision-making.
  • Though the bills vary, most share an underlying effort to protect citizens from the overreach of AI; efforts at the federal level, however, may threaten this progress.
A general view of the Maryland State House, in Annapolis, Maryland, on Friday, October 4, 2024. (Graeme Sloan/Sipa USA)

It is no surprise that nations are embracing and legislating artificial intelligence. At last count, the governments of 54 countries, ranging from China, India, and the United States to Uganda, Armenia, and Latvia, had published national AI plans. These plans run the gamut from those highly focused on defense-related initiatives (e.g., China) to those more focused on societal betterment (e.g., Switzerland). Some are concerned with broad governance and data privacy guidance, while others largely ignore such guardrails. Our team’s earlier series of posts detailed how countries were approaching national AI plans, as well as how to interpret those plans. In the concluding paper in that series, we identified how the United States could dominate in AI by applying lessons from the U.S. space race to invigorate talent development, adopting a consortium model for AI development, and creating a partnership with another country to share talent and assets. We are gratified to see the United States moving in that direction. On talent development, the “Computer Science For All Act,” introduced in 2023, would require the Department of Education to award grants to various entities for computer science education. An example of such a consortium is the Partnership for Global Inclusivity on AI (PGIAI), which brings together the U.S. Department of State, major U.S. tech companies, and international partners for the responsible development of AI, particularly to address quality-of-life issues in developing countries. Finally, under the Biden administration, the United States developed a partnership with India (our specific partnership recommendation) to share AI infrastructure investments.

As part of our research, our focus shifted to how the United States is approaching AI in another series of posts, using data provided to us by Leadership Connect. Here, we analyzed how the U.S. federal government was spending funds in pursuit of AI and found a widely dispersed purchasing landscape with no real primary focus: the classic hallmark of a new and still-immature market. In our most recent post on that topic, we analyzed hundreds of federal government contracts and found a sharp rise in AI spending and a maturing market, albeit one massively concentrated in the U.S. Department of Defense (DoD). As we noted in that paper, “DoD grew their AI investment to such a degree that all other agencies became a rounding error.”

The race to implement AI has now shifted to the states: at present, 34 states are studying AI, with 24 having created a dedicated group to study it and another 10 having delegated the task to a standing committee. In addition, according to tracking by the Brookings Center for Technology Innovation, 47 states have introduced AI-related legislation in 2025. New York has introduced the most legislation, but Texas has passed the most (see Table 1). As such, we believe the time has come to analyze the state of the states as it relates to AI; this is the first in a multi-part series of papers on the topic.

Table 1

Our data 

Our team used data from the Brookings Center for Technology Innovation for this analysis, current as of June 27, 2025. A total of 260 measures related to AI were introduced in the 2025 legislative session, of which 22 have been passed. The remaining bills are pending or have been rejected.

When examining the language in the summaries of proposed legislation, it is immediately obvious that the focus is on protecting citizens, as evidenced by two of the most frequently used words across the proposed bills: “prohibit” and “disclosure.” Given this, it seems clear that, at present, the states see the misuse of AI as something citizens need to be protected against, rather than seeing the appropriate use of AI as an opportunity to deliver better services to citizens.
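For readers who want to replicate this kind of word-frequency check on bill summaries, a minimal sketch follows. The input file name and format are our illustrative assumptions, not part of the Brookings tracker:

```python
from collections import Counter
import re

# Hypothetical export of the tracker: one bill summary per line.
# The file name and format are assumptions for illustration only.
with open("bill_summaries.txt", encoding="utf-8") as f:
    text = f.read().lower()

# Crude tokenization; a fuller analysis would stem words so that
# "prohibit"/"prohibits" and "disclose"/"disclosure" group together.
words = re.findall(r"[a-z']+", text)

STOPWORDS = {"the", "a", "an", "and", "or", "of", "to", "in", "for",
             "by", "on", "that", "with", "as", "is", "be", "this"}

counts = Counter(w for w in words if w not in STOPWORDS)
for word, n in counts.most_common(20):
    print(f"{word}\t{n}")
```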

Analysis 

Analysis of the 2025 data shows that roughly two-thirds (173) of state AI bills were introduced by Democrats, compared to about one-third (84) by Republicans. Only three (in Minnesota, New Jersey, and Tennessee) were bipartisan efforts. Overall, Democrats have been more proactive in pushing AI governance, consistent with their broader tech-regulation tendencies. Republicans have also put forward many AI bills, often on issues like banning certain harmful uses of AI or promoting innovation. Bans on election deepfakes and on AI-generated child sexual abuse material, for example, drew bipartisan interest. On the other hand, sweeping regulatory bills (imposing strict obligations on AI developers or businesses) have mostly come from Democrats in states like California, New York, and New Jersey. Republican-led states often favored lighter-touch approaches; Texas, despite passing the most bills overall, failed to pass a high percentage of those it proposed.

We first analyzed a few of the major themes in the 2025 bills (see the tally after this list):

  • Nonconsensual intimate imagery (NCII)/Child sexual abuse material (CSAM): 53 bills introduced and 0 currently signed into law 
  • Elections: 33 bills introduced and 0 currently signed into law 
  • Generative AI transparency: 31 bills introduced and 2 currently signed into law 
  • Automated decision-making technology (ADMT)/High-risk AI: 29 bills introduced and 2 currently signed into law 
  • Government use: 22 bills introduced and 4 currently signed into law 
  • Employment: 13 bills introduced and 6 currently signed into law 
  • Health: 12 bills introduced and 2 currently signed into law 
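One way to read these counts is by passage rate rather than volume: the narrower employment and government-use bills have so far been far more likely to be signed than the high-volume NCII/CSAM and elections bills. A minimal sketch, using only the counts above, makes that comparison explicit:

```python
# (introduced, signed into law) per theme, as of June 27, 2025.
themes = {
    "NCII/CSAM": (53, 0),
    "Elections": (33, 0),
    "Generative AI transparency": (31, 2),
    "ADMT/High-risk AI": (29, 2),
    "Government use": (22, 4),
    "Employment": (13, 6),
    "Health": (12, 2),
}

# Rank themes by the share of introduced bills actually signed into law.
for theme, (introduced, signed) in sorted(
        themes.items(), key=lambda kv: kv[1][1] / kv[1][0], reverse=True):
    print(f"{theme}: {signed}/{introduced} signed ({signed / introduced:.0%})")
```

Running this puts employment bills at roughly a 46% passage rate and government-use bills at 18%, while the two highest-volume themes, NCII/CSAM and elections, sit at zero.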

Some states introduced comprehensive AI governance bills covering broad obligations for AI developers or government use, while others focused on narrow, sector- or issue-specific bills. For instance, Colorado’s “Consumer Protections for Artificial Intelligence” bill created a wide-ranging regime for “high-risk” AI systems, requiring developers and deployers of such AI to implement transparency, monitoring, and anti-discrimination measures. In contrast, starting in 2024, California pursued a patchwork of targeted laws addressing election deepfakes, AI-generated content warnings, digital replicas of performers (including one bill covering deceased performers), and training-data disclosure, rather than one overarching framework. It is worth noting that California originally pursued a comprehensive bill, but it was vetoed by the governor, and the state reverted to the more piecemeal approach. Analysts point to this split between comprehensive and targeted approaches as a major difference among state strategies. Some smaller states have pursued primarily symbolic or exploratory bills (e.g., creating AI task forces or study commissions rather than imposing rules).

The next sections of the blog go into detail on the various proposals in each category.

NCII/CSAM 

Not surprisingly, two major topics considered by legislators in 2025 were NCII and CSAM. A Maryland NCII bill illustrates the intent of take-it-down legislation (mirroring the federal TAKE IT DOWN Act), as it requires certain online platforms to establish a process for individuals to request removal of NCII, including synthetic NCII. A Mississippi NCII bill prohibits knowingly, and with an intent to injure, publishing or disseminating synthetic NCII or deepfakes of another person; a number of other states have similar bills. A New Mexico bill prohibits the nonconsensual public dissemination of synthetic content that depicts an identifiable person engaging in conduct in which they did not engage. In total, 16 states proposed bills on this topic and, as is to be expected, all of the bills proposed significant penalties for those engaging in these acts. Interestingly, most of these bills have already died in committee.

Elections 

Elections were a hot-button issue in bills introduced in 2025, and most of the provisions required candidates to disclose whether AI was used to craft an advertisement. For example, a bill in New York seeks to require any political communication that uses “synthetic content” to disclose that use. While these examples cover cases where candidates seek to improve their own standing, there is also legislation focused on opponents using AI to damage it. For example, a Massachusetts bill seeks to prohibit purposefully deceiving people, injuring a political candidate, or influencing an election by creating, publishing, or distributing a deepfake video or an “altered image” within 90 days of an election. Both of these bills are still in committee and are expected to die there.

Generative AI transparency 

Generative AI transparency was also closely tracked in 2025, and the most common fear the bills addressed was that a person might not know they were communicating with a chatbot. For example, Hawaii proposed legislation that would require corporations, organizations, and individuals engaged in commercial transactions to clearly and conspicuously notify consumers when they are interacting with a chatbot or other technology capable of mimicking human behaviors. A similar bill was proposed in Massachusetts; that bill would also require providers to maintain a red team to determine whether watermarks on AI-generated content could be easily removed, and to report those findings to the state’s attorney general. The Hawaii bill, along with a similar New Mexico measure, died on the floor, but the Massachusetts bill is still in committee.

ADMT/high-risk AI 

ADMT and high-risk AI were also major topics of legislative bills. The crux of these bills was the recognition that safeguards are needed to protect citizens from the unintended consequences of AI systems. Most of the bills from this session were styled after Colorado’s AI Act, which was signed into law in 2024 and is set to go into effect in 2026. The Colorado legislation focuses on algorithmic discrimination in “consequential decisions,” requires identification of the use of AI as a “substantial factor” in such decisions, imposes a duty of care, mandates transparency and accountability for developers and deployers, and affirms consumers’ right to an explanation of AI’s role in decision-making. Legislation in a number of states, including Georgia, Illinois, Iowa, and Maryland, is closely modeled on the Colorado bill. However, the Georgia, Iowa, and Maryland bills all died in committee, and the Illinois bill remains in committee.

Government use 

This category covered the waterfront in terms of government use of AI. For example, Georgia’s proposed “AI Accountability Act” would create a Georgia Board for Artificial Intelligence and require government entities to develop an AI usage plan detailing specific goals and data privacy measures and outlining the role of human oversight. In contrast, a Montana bill, recently signed by the governor, limits the use of AI by state and local government, requires disclosure of the use of AI systems, and requires certain decisions and recommendations to be reviewed by a human in a “responsible position.” Finally, a Nevada bill would require the Department of Taxation to inform taxpayers when they are communicating with AI systems. The majority of the bills aimed to protect citizens from the state using AI to make governing decisions. The Georgia and Nevada bills have since died in committee.

Employment 

There were a number of AI-related bills surrounding employment, all of them focused on protecting employees from AI decision-making. For example, an Illinois bill (currently in the assignments committee) would require employers to notify applicants if AI is used as part of the interviewing and decision-making process. A similar bill is under consideration in Pennsylvania. A number of states, including California, have also moved to limit the use of AI-based workplace surveillance.

Health 

Health care was a major focus of legislation, with all bills addressing the potential issues arising from AI systems making treatment and coverage decisions. For example, California’s bill focuses on protecting Californians from AI systems and “prohibits the use of specified terms, letters, or phrases to indicate or imply possession of a license or certificate to practice a health care profession.” A bill in Illinois, which is awaiting the governor’s signature, would outright prohibit licensed health care professionals from using AI to make therapeutic decisions, interact with clients, or generate therapeutic recommendations or treatment plans. Other states, such as Indiana, did not go as far, merely requiring health care professionals to disclose to a patient when AI was used to make, or to communicate, a health care decision. The Indiana bill placed similar requirements on health insurers.

Concluding thoughts 

The majority of bills proposed by states in 2025 were focused on protecting citizens from the overreach of AI. States recognize the need to act on AI, though they are taking a variety of approaches. Still, common ground is found in areas where tangible harm is evident or widely acknowledged (e.g., CSAM, NCII, and elections).

There is a storm cloud over this progress at the federal level, and it is unclear how it will affect how states address AI. Republicans in the U.S. Senate recently proposed a 10-year moratorium on states enacting any laws that regulate AI; the provision was eventually dropped, but several lawmakers supported it. We believe this battle is just beginning, and it is unclear what direction Congress will eventually take. It is clear, however, that the federal government plans to keep a close eye on state legislation: the federal AI Action Plan has tasked the FCC with monitoring the impact of state laws.

Until and unless prohibited by federal law, we expect states to continue pressing hard on AI legislation to address their citizens’ concerns. The topic is simply too pressing for the states to stand by while the federal government tries to figure it out. Our goal in this series of papers is to better understand how and why state governments are addressing AI and to create a playbook for doing it successfully.   


Footnotes

1. Different tracking organizations use different criteria for deciding what counts as an AI-related bill, so counts from different sources may vary somewhat.
2. As of June 27, 2025.
