In the absence of congressional legislation on artificial intelligence (AI), states are beginning to write their own proposals designed to curb its harmful consequences. With federal policymaking unsettled, states are moving forward with actions on privacy, bias, discrimination, safety, and security, among other topics. The White House’s recent Executive Order on AI is a useful step forward, but its full implementation will require national legislation that builds on the October 2022 Blueprint for an AI Bill of Rights, the January 2023 NIST AI Risk Management Framework, and insights derived from Senate Majority Leader Chuck Schumer’s AI Insight Forums. While federal lawmakers and company executives alike are clearly eager to advance U.S. AI capabilities and international leadership, their failure to enact new legislation is encouraging states to take their own actions on AI oversight and regulation.
State and local action
In 2023, state-led bills on AI increased dramatically. According to the Software Alliance, state lawmakers introduced over 440% more AI-related bills in 2023 than in the previous year. Governors in states like California, Pennsylvania, and New Jersey have issued executive orders standing up task forces and similar bodies to explore research standards and regulatory frameworks. Texas, Connecticut, and Illinois, among other states, have passed legislation on AI, while New York has seen a growing number of AI bills advance through its legislature. Maine issued a six-month moratorium on generative AI use in government agencies.
Municipalities, including Seattle and New York City, have also been active on the AI governance front. Seattle recently released a Generative Artificial Intelligence Policy to align with the priorities put forth in President Biden’s November 2023 AI Executive Order. In July 2023, New York City’s Automated Employment Decision Tool law went into effect, and the city recently announced plans to issue guidance for generative AI use in local government.
Taken together, these activities demonstrate that state and local jurisdictions are willing and able to act on their own. In many cases, they are reaching out to the expert community to better understand AI’s implications. Many of the state bills do not focus on any one AI problem but instead address a wide array of concerns, from traffic safety to employment.
States and municipalities in particular are surfacing debates on the regulation of particular products in the AI ecosystem, including generative AI, facial recognition technologies, and the use of AI in automated telephone services. For example, New York City Mayor Eric Adams has received pushback for using AI to “robocall” residents. The tool was originally employed by city agencies to communicate more effectively with immigrant populations in various languages, but it has instead misled some of those groups. Meanwhile, the city of Amarillo, Texas is working with Dell Technologies to develop a generative AI-powered digital assistant, designed around the city’s identity, tone of voice, and knowledge, that will answer residents’ questions about public services in multiple languages. Whether the technology will incur additional risks and consumer harms is not yet known. These and other examples highlight how municipalities will be critical to future federal legislation, especially as they experiment with frontier models to deliver city services.
Frontier AI models
Lawmakers are also moving to address “frontier” AI systems, which OpenAI has defined in a recent paper as highly capable foundation models that could possess dangerous capabilities sufficient to pose severe risks to public safety. For example, a California bill proposed that systems requiring a certain quantity of computing power to train should be subject to transparency requirements, though it did not specify the threshold. Additionally, New York’s Advanced Artificial Intelligence Licensing Act would require “registration and licensing of high-risk advanced artificial intelligence systems” (A.B. 8195, 2023-2024 Legislative Session, NY). State regulations restricting deepfakes have also become more common, especially where they are tied to election security, as in New York and New Jersey.
Persistent congressional inaction will generate state action
Because the internet crosses state lines, it is critical that the federal government consider and pass comprehensive AI legislation that ensures safe, trustworthy, and secure products and services before it becomes harder to unbundle state and local activities. It would not be productive for the United States to end up with 50 different sets of rules based on state jurisdictions; companies will not want to run different algorithms in Texas and New York.
But if Congress does not act, states will move forward with their own provisions for overseeing AI. They are not waiting for the federal government; they are designing their own rules, considering whether to create new AI-related agencies, soliciting public comment on draft rules, and viewing AI leadership as an opportunity to stand out among their peers nationally and globally. With Congress set to recess without any significant AI legislation, states are not going to sit back and do nothing. They do not face the partisan barriers present in Washington, D.C., because many states are controlled by a single party and thus are in a strong position to move legislation. Since no one knows how long it will take Congress to get its act together, states and localities are going to find their own paths to address AI’s pitfalls and biases.