How California and other states are tackling AI legislation

A general aerial view of the California State Capitol building, Saturday, Dec. 24, 2022, in Sacramento, Calif. (Photo by Image of Sport/Sipa USA)

Last week, California State Assemblymember Rebecca Bauer-Kahan introduced a bill to combat algorithmic discrimination in the use of automated tools that make consequential decisions. And California is not alone—a new wave of state legislation is taking on artificial intelligence (AI) regulation, raising key questions about how best to design and implement these laws. Generally, the bills introduce new protections when AI or other automated systems are used to help make consequential decisions—whether a worker receives a bonus, a student gets into college, or a senior receives their public benefits. These systems, often operating opaquely, are increasingly used in a wide variety of high-stakes settings. As motivation, Assemblymember Bauer-Kahan’s office cites demonstrated algorithmic harms in healthcare, housing advertising, and hiring, and many other such harms have unfortunately been documented.

AI regulation in the United States is still quite nascent. Congress has passed important bills focused on government AI systems, and the Trump administration issued two relevant executive orders, but these oversight efforts have so far been largely ineffectual. In 2022, the Biden administration issued voluntary guidance through its Blueprint for an AI Bill of Rights, which encourages agencies to move AI principles into practice. The White House has also issued two executive orders directing agencies to focus on equity in their work, including by taking action against algorithmic discrimination. Many individual agencies have taken heed and are making progress within their respective jurisdictions. Still, no federal legislation focused on protecting people from the potential harms of AI and other automated systems appears imminent.

The states, however, are moving ahead. From California to Connecticut and from Illinois to Texas, the laboratories of democracy are starting to take action to protect the public from the potential harms of these technologies. These efforts, coming from both Democratic and Republican lawmakers, are grounded in principles of good governance. Broadly speaking, the state legislative efforts seek to balance stronger protections for their constituents with enabling innovation and commercial use of AI. There is no single model for these efforts, but a few important areas of consensus have emerged, both from the draft bills and from legislation that has already passed.

First, governance should focus on the use of algorithmic tools in settings that significantly affect people’s civil rights, opportunities for advancement, and access to critical services. To this end, while the term ‘artificial intelligence’ is a useful catch-all that helps motivate the need for legislative action, it is encouraging that governments are setting this term aside when defining the scope of oversight and are focusing instead on critical processes that are performed or influenced by an algorithm. In doing so, state governments include any type of algorithm used in a covered process, whether it is simple and rules-based or powered by deep learning. By placing the attention and governance burden on impact in high-stakes decision-making, rather than on the particular details of any specific technical tool, innovation can flourish while the necessary protections remain future-proofed.

Second, there is wide agreement that building in transparency is critical. When using algorithms for important decisions, companies and governments should explicitly inform affected persons (as the California bill requires). Further, public disclosure about which automated tools are implicated in important decisions is a key step in enabling effective governance and engendering public trust. States could require registration of such systems (as the EU plans to do and as a bill in Pennsylvania would require) and could also ask for more systemic information, such as details about how the algorithms were used, along with the results of system evaluations and bias assessments. These assessments use transparency to directly tackle the key question about these systems: Do they work, and do they work for everyone?

Making parts of these algorithmic impact assessments public would enable more public accountability and lead to better governance by better-informed lawmakers. Algorithmic impact assessments could also improve the functioning of markets for AI tools, which currently suffer from exaggerated promises followed by routine failures. There is growing consensus here as well: many jurisdictions with current draft legislation (including California, Connecticut, the District of Columbia, Indiana, Kentucky, New York, Vermont, and Washington) include required impact assessments, although they vary in the degree of transparency required.

So far, state legislators have reached different decisions about whether to limit their oversight to government uses of these systems or to also cover other entities within the state, especially commercial uses of algorithms. In California, the bill includes non-governmental uses of automated systems. In Connecticut and Vermont, the focus is exclusively on government use. Focusing only on government algorithms allows compliance to be handled through internal government guidance and processes, which may make adherence easier in some ways. Holding non-governmental uses to standards that, for example, aim to ensure systems are tested for efficacy and non-discrimination before deployment raises the question of enforcement. California’s bill includes a private right of action, which enables individuals to file a lawsuit when their rights are violated and is a key protection. But to ensure proactive protections and detailed guidance, a regulatory approach is necessary. In many settings, lawmakers will have to solve the same policy problems regardless of whether they limit their scope to government use or include private use. For example, it would make sense for hiring algorithms to be held to the same standards regardless of which entity is doing the hiring.

Some rules about automated decision tools will make sense cross-sector—for instance, the aforementioned disclosure of algorithms to affected persons, or the right to correct errors in data used for important algorithmic decisions. However, many others may require guidance that is specific to the application: Automated decision tools used in healthcare should follow rules crafted based on those particular risks and existing regulations, while systems used in employment face a different risk and regulatory landscape. In addition to ensuring existing sectoral regulations are effectively applied to algorithms, new guidance may need to be issued relating to the use of automated tools in that sector. Existing state agencies are best placed to understand the role and impact of algorithmic systems in their domains and should generally provide such oversight. When possible, an existing health agency should regulate health-related AI, a labor department should regulate employment-related AI, and so on.

Yet this raises a key challenge: State agencies may lack the technical expertise to effectively oversee algorithmic systems. A promising solution is for existing agencies to provide this sector-specific oversight jointly with an office that has technical expertise. This might be a new AI office, or an existing technology office or privacy agency (as has been proposed in Connecticut and implemented in Vermont). This would be an effective short-term solution, although, in the long term, some agencies might benefit from building significant in-house expertise in using and regulating algorithmic systems. States might also consider new hiring pathways for AI and data science expertise, as the federal government has done. Additionally, state agencies may lack the explicit authority to issue guidance over the development, deployment, and use of automated decision tools—their authority should be appropriately expanded to reflect the challenges of governing AI.

Some states (including Texas, Maryland, Massachusetts, and Rhode Island) are considering setting the deliberative process in motion by first creating commissions to study the problem and make recommendations, as has previously been done by states including Vermont, Colorado, Alabama, and Washington. This may cause a significant delay in adapting government protections to an already algorithmic world. Instead, state governments should act on two fronts in parallel. Lawmakers should learn about citizens’ concerns while simultaneously adapting state governance to well-understood algorithmic challenges, such as through transparency requirements as well as new agency authority and capacity. Investigations and research can help determine which sectors the state might want to prioritize for investment, training, and regulation. But these inquiries must not distract or delay lawmakers from the important work of protecting their constituents by enacting AI governance legislation that contains policies that already have broad consensus.

While lawmakers will have many considerations that are specific to their state, generally, the most effective state-level AI governance legislation will have the following elements: It 1) includes within its scope any technologies that make, inform, or support critical decision-making, 2) mandates proactive algorithmic impact assessments and transparency surrounding these assessments, 3) covers both government and private sector use, and 4) identifies clear enforcement authority on a sectoral basis, including consideration of a regulatory approach with proactive requirements. These elements will allow state legislators to provide sensible protections for their constituents now and in the future, while encouraging technological innovation.