The problems with a moratorium on training large AI systems


In late March, the Future of Life Institute released an open letter (and a related FAQ) calling “on all AI labs to immediately pause for at least six months the training of AI systems more powerful than GPT-4. This pause should be public and verifiable, and include all key actors. If such a pause cannot be enacted quickly, governments should step in and institute a moratorium.” The letter, which also stated that “Powerful AI systems should be developed only once we are confident that their effects will be positive and their risks will be manageable,” was initially signed by over a thousand people, including many notable technology leaders. Many thousands more added their signatures after its publication.

Individual companies and universities have a right to decide whether, and at what pace, they will do work on artificial intelligence (AI). But a government moratorium in the United States on training powerful AI systems would raise a host of concerns, including the following:

Delaying the Benefits of AI

It’s already clear that AI will bring benefits to drug development, medical diagnosis, climate modeling and weather forecasting, education, and many other areas. In addition, as is often the case with emerging technologies, large AI systems will produce benefits that we can’t yet foresee.

A nationwide, government-imposed cessation of work on a key category of AI would have the inevitable result of delaying access to the technology’s benefits. For some applications, such as the use of large language models to improve education and broaden access to legal services, those delays would have problematic consequences.

Legally Dubious

There is no U.S. federal or state government entity that has clear legal authority to issue a moratorium on the training of large AI systems. For instance, the Federal Trade Commission’s (FTC) mission is “protecting the public from deceptive or unfair business practices and from unfair methods of competition.” Ironically, an FTC moratorium would impede companies from competing to develop better AI systems, pushing them instead to act in lockstep to stop (and then later restart) their work on training large AI models. And, while Congress has broad legislative authority under the Commerce Clause, that authority also has limits.

There would also be implications in relation to the First Amendment, which protects the receipt of information, including digital information obtained over the internet. Of course, as several lawsuits recently filed against companies that make AI image generators underscore, there are complex unresolved copyright law questions when AI models are trained using third-party data. But, to the extent that a company is able to build a large dataset in a manner that avoids any copyright law or contract violations, there is a good (though untested) argument that the First Amendment confers a right to use that data to train a large AI model.

In short, a moratorium—whether it came from a government agency or from Congress—would immediately be challenged in court.

Difficult to Effectively Enforce

A moratorium would be difficult to effectively enforce. The U.S. government is clearly not going to start engaging in prohibition-era-style raids on companies suspected of performing unauthorized AI training. More generally, the government does not have the human or technical resources to affirmatively verify compliance with a nationwide moratorium. Instead, a moratorium would likely be implemented through a self-reporting process, requiring companies and universities to certify that they are not engaging in prohibited AI work. There would be no easy way to generate the list of companies and universities subject to this certification requirement.

Another problem with enforcement is that, unless a whistleblower comes forward, moratorium-violating behavior would be nearly impossible to detect. AI is very different from a domain like nuclear weapons development, where compliance with moratoriums is feasible (though not always easy) to track because the associated materials and technologies, such as uranium and nuclear centrifuges, are hard to come by, difficult to work with, and have a very limited set of uses. With AI systems the key ingredients are data and computing power, both of which are readily accessible and have an essentially limitless list of non-moratorium-violating uses.

Line-drawing Problems

Yet another concern would lie in defining what AI-related work is prohibited. What would be the size threshold for AI systems subject to the moratorium? What metric or set of metrics would be deemed sufficient to characterize the size of an AI system? Who would do the measuring? Could regulatory language imposing a size-specific AI system moratorium be written without creating loopholes allowing it to be easily circumvented? And would the moratorium only apply to the actual training of large AI systems, or also to the development of related technologies—some of which might make it possible to build powerful AI with smaller systems or less training than before?
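To see why the choice of metric matters, consider training compute as one candidate threshold. A common rule of thumb estimates total training compute as roughly 6 × (parameter count) × (training tokens). The sketch below (in Python, using a purely hypothetical cutoff value) shows how the same model architecture can fall on either side of such a line depending only on how much data it is trained on:

```python
# Rule of thumb: training FLOPs ~ 6 * parameters * training tokens.
# The cutoff below is purely hypothetical, chosen for illustration only.
HYPOTHETICAL_FLOP_CUTOFF = 1e25

def estimated_training_flops(params: float, tokens: float) -> float:
    """Estimate total training compute via the common 6*N*D approximation."""
    return 6 * params * tokens

def exceeds_cutoff(params: float, tokens: float) -> bool:
    """Would this training run cross the hypothetical compute threshold?"""
    return estimated_training_flops(params, tokens) > HYPOTHETICAL_FLOP_CUTOFF

# A 70B-parameter model trained on 1 trillion tokens:
# 6 * 7e10 * 1e12 = 4.2e23 FLOPs -> under this particular cutoff.
print(exceeds_cutoff(7e10, 1e12))   # False

# The same architecture trained on 30 trillion tokens:
# 6 * 7e10 * 3e13 = 1.26e25 FLOPs -> over it.
print(exceeds_cutoff(7e10, 3e13))   # True
```

Compute is only one dimension, of course; a threshold written in terms of parameter count or dataset size would draw the line in a different place, which is precisely the definitional difficulty described above.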

The “What Next?” Problem

A six-month moratorium would also quickly lead to a lack of consensus on what to do next. As the expiration date grew closer, some people would argue that the moratorium should be extended for another six months or longer. Others would argue that it should be lifted completely. Still others would argue for a new, different framework, perhaps based on revising the rules on what specific activities were prohibited. These uncertainties would make it very difficult for companies to make decisions regarding hiring, research and development investments, and AI-related product planning.

Geopolitical Implications

An obvious consequence that is nonetheless worth noting is that a moratorium in the U.S. on training the largest AI models would have no force internationally. Governments and companies in other countries would continue to invest in building large AI systems. The advances, know-how, and job creation arising from that work would accrue elsewhere, putting the U.S. at a disadvantage in AI technology.

In sum, AI holds extraordinary promise, while also creating a new set of risks. Regardless of what policies the U.S. adopts, the technology of large AI systems is going to continue to advance at a global level. It is far better for the U.S. to remain at the forefront of AI—advancing the state of the art, and using that knowledge to better identify and mitigate risks—than for the U.S. government to attempt to impose a legally dubious, unenforceable, and easily circumvented nationwide halt on work on training large AI systems.