How to systematically think about AI regulation

Senator Chuck Schumer (D-NY), the Senate majority leader, speaks to media with Senator Mike Rounds (R-SD) and Senator Todd Young (R-IN) after the Senate Artificial Intelligence Insight Forum, at the U.S. Capitol, in Washington, D.C., on October 24, 2023. (Graeme Sloan/Sipa USA)

Artificial intelligence (AI) technologies have been emerging for years now, but legislative action on AI regulation has been haphazard, piecemeal, and, critically, often lacking altogether. To address this problem, the Biden administration has released an Executive Order (EO) on the Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence that outlines an ambitious response to concerns about bias and discrimination, labor displacement, and geopolitical competition.

The EO takes a shotgun approach, attempting to nail down as many AI topics as possible. It is a useful step, but components of it may be challenged in court, and some provisions are not mandatory; federal agencies, for example, are not required to adopt the National Institute of Standards and Technology frameworks for safe AI. Legislative support is therefore also needed.

The U.S. Congress has been holding a number of summits and hearings that recognize AI’s transformative impacts and the need to establish guardrails. A flurry of bills has already been proposed, but buy-in is mixed and the legislation is not comprehensive. Crucially, many policy domains that involve AI are missing altogether: there is, for example, relatively little emphasis on foreign affairs in the bills currently on the floor. The parade of bills also tends toward surface-level legislation, usually not addressing the data center, hardware, or even far-upstream commodity nodes of the “AI stack.”

Some of the lack of political action is, of course, a function of polarization. Yet even within the executive branch, the Biden administration has debated whether regulation that enhances safety comes at the cost of stifling innovation. For would-be policymakers, AI regulation is a daunting task: the development, implementation, and use of these technologies move at lightning speed, and to someone not “in the know” of AI, the field can appear impossibly enigmatic.

The “SETO Loop”

A clear theoretical framework for thinking systematically about AI is needed to contextualize and spur legislation. In a recent publication on the topic, we argue that policymakers considering such regulation should follow what we call the “SETO Loop.”

  1. Scope. Identify the precise, specific regulatory problem that arises from the advent of a precise, specific AI-based technology. Is the object of protection the individual, the nation, a market, or humanity itself?
  2. Existing regulation across nodes. In what ways do existing regulations fail to address this problem? What nodes require regulation?
  3. Tool. What tool is best suited to ameliorate the problem? A total ban, taxes and punishments, blueprint manipulation, information revelation, or voluntary rules?
  4. Organization. Who should enact the regulations? How will these regulations have to adapt and at what speed?

First, the process of determining AI regulation should be problem-driven. Regulation is not costless, both because of the economic benefits of AI innovation and because of the geostrategic competition around AI. Policymakers should tap into expertise that can shed light on the risks: to whom they accrue, how severe they are, and how likely they are to materialize. They are currently taking this step through the series of “AI Insight” forums, which serve as a data-collection exercise.

Second, policymakers should consider whether existing regulations already cover these technologies. New technologies need not mean new laws. The 1976 Copyright Act, for example, has proven resilient in the face of new technologies; it is simply interpreted in new ways. The U.S. Copyright Office has consistently maintained that the output of text-to-image AI is not entitled to copyright protection because it is not the product of human authorship.

We note that while regulation falls on the cutting edge of the AI “stack” (or supply chain), upstream nodes receive hardly any mention across many regulatory ordinances. For example, though advanced chips have already been in the Biden administration’s crosshairs, there are no oversight mechanisms to prevent further-upstream commodities (e.g., gallium, an input used in manufacturing semiconductor chips) from falling into the hands of malicious actors.

Third, depending on the problem or risk, policymakers should consider the array of potential regulatory tools that might be appropriate. AI technologies are wide-ranging, from medical robotics to battlefield targeting to writing assistants. One regulatory tool will not fit all; the appropriate tools will almost certainly differ depending on the specific AI technology in question. All-out bans, like the one Italy attempted with ChatGPT in spring 2023, make little sense for generative AI. These technologies are diffusing quickly and are widely available, so bans are futile. Similarly, imposing fines, as the United States has done for violations of export controls, is also pointless for generative AI: the technology itself is not verging on illegal, nor would bans or fines be enforceable or practicable given the porosity of digital borders. We see an inkling of more creative, “mixed” regulatory schemes in the EO. “Testing and evaluations, including post-deployment performance monitoring, will help ensure that AI systems function as intended…” the EO states, yet it does not mention the more creative certification markets described below.

Fourth, policymakers should, based on how they addressed the first three parameters, determine the appropriate organization within government. The United States in particular is federalized and decentralized, which can be a strength for AI regulation, since different technologies and risks will require different types of policies. For example, the Federal Trade Commission is taking an active role in issues of AI, content creation, and privacy, while the Department of Defense would be more appropriate for questions of battlefield autonomy. In some cases, interagency or even international coordination might be appropriate for establishing and enforcing standards across countries. Some forms of regulation might be best achieved within existing organizations (e.g., standards workshops established within U.N.-affiliated organs), whereas others might require new organizations entirely (e.g., a global reporting framework for “know-your-customer” requirements for renting computing power to train frontier AI models). With the United Kingdom’s AI Safety Summit now concluded, it is clear that at least some nominal camaraderie exists among states on AI governance. Legislators would do well to remember that the “O” in SETO may include international organizations and bilateral forums.

Organizational flexibility might be brought about by making use of “mixed” regulatory markets: those in which certifying bodies carry out regulation while a government entity oversees the certifiers. Credit rating agencies, for example, work in this model: overseen by the U.S. Securities and Exchange Commission, agencies such as Moody’s are empowered to rate debt obligations. Nimble as such arrangements might be, the problem that critics say plagued credit rating agencies during the 2007 financial crisis could recur, with shoddy debt obligations receiving inflated ratings.

The way forward

As policymakers respond to the constant drumbeat of calls for AI regulation — including from the AI industry itself — they would be wise to consider a systematic approach. AI technologies are evolving quickly, meaning that hasty regulation today could result in outdated regulation tomorrow. They are also economically and strategically valuable, meaning that regulation that stifles innovation could put the United States at a relative disadvantage to China.

Regulation can be harmful, and not just because of the potential opportunity cost of forgone innovation, a cost especially salient in the U.S. context, where many AI innovators are based. Its hazards also include potential coordination problems across different levels of government, as well as the cost of regulations that are infeasible or outright impossible to implement. A step-by-step strategy that starts by establishing the scope, then surveying existing regulation, then weighing potential tools, and finally selecting the appropriate organization will give policymakers structure for the otherwise enormous and abstract task of framing the problem and trying to solve it.