The following is a summary of the 35th session of the Congressional Study Group on Foreign Relations and National Security, a program for congressional staff focused on critically engaging with the legal and policy factors that define the role Congress plays in various aspects of U.S. foreign relations and national security policy.
On July 12, 2024, the Congressional Study Group on Foreign Relations and National Security convened virtually to discuss possible constitutional limits on and barriers to the regulation of artificial intelligence (AI). Concerns over the rapid development of AI technology have led policymakers at all levels to consider an array of possible regulatory approaches. While Congress debates a possible federal approach, several states have begun to step into the void with their own legislation. The leading example is California’s S.B. 1047, which would, among other measures, require that all AI developers of a particular scale “provide reasonable assurance” under oath that their models are unable to cause $500 million in damage to critical infrastructure within the state or lead to a mass-casualty event. But observers have questioned whether such requirements are consistent with the First Amendment and other possible constitutional constraints.
The study group was joined by two outside experts for this session:
- Alan Rozenshtein, associate professor at the University of Minnesota Law School; and
- Jess Miers, senior counsel at Chamber of Progress.
Prior to the discussion, the study group circulated the following background materials:
- Dean Ball & Alan Rozenshtein, “Congress Should Preempt State AI Safety Legislation,” Lawfare (June 17, 2024);
- Alan Rozenshtein et al., “AI Safety Laws Are Not (Necessarily) a First Amendment Problem,” Lawfare (June 7, 2024); and
- Jess Miers, “California’s SB 1047 Could Stop AI Startups Before They Even Start,” Medium (May 9, 2024).
Miers kicked off the discussion, explaining that conversations surrounding AI regulation should aim to be more incisive regarding: (1) the type of technology being targeted; and (2) whether a potential rule regulates the design of the AI tool itself or constrains the people using it.
Rozenshtein then provided a snapshot of the current regulatory landscape in the U.S., noting that, thus far, state regulations are far more substantive than federal initiatives. On the federal level, a group of senators has released a Roadmap for Artificial Intelligence Policy, which is less about regulating AI than about encouraging its development in the U.S. and ensuring that the federal government remains involved. The Biden administration has also issued an Executive Order on the Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence to promote transparency and reporting requirements. Still, there has not yet been a viable proposal for federal congressional regulation.
Meanwhile, on the state level, regulations related to AI generally fall into three buckets:
- Preexisting regulations that address issues beyond AI but also apply to harms caused by certain uses of generative AI. For instance, AI tools may generate defamatory content, and preexisting defamation laws will still apply.
- AI-specific regulations governing how AI is used by the end user. For example, Minnesota has regulated the use of deepfakes and required transparency by mandating disclosure that certain content is AI-generated. This bucket does not raise novel constitutional problems, as it implicates behavior that is already regulated.
- AI-specific regulations targeting generative AI models themselves. The California AI safety legislation is a notable example, as it would empower the state attorney general to pursue AI companies directly if their tools are linked to any form of harm or casualty. A host of constitutional issues may arise in this bucket.
The discussion then turned to how the First Amendment may apply in the AI context. Miers stated that this remains a grey area, and Rozenshtein added that it is important to note that the question remains undecided. Courts are likely to be slow to address it, and the answers they eventually provide are likely to be unclear. Miers explained that, from the outset, the First Amendment applies to humans, not to technology itself. There are two avenues through which the First Amendment may be implicated:
- Multiple parties are involved at different points in the creation of a generative AI service: developers, data set creators, and the people deciding how to train the technology. Each layer of a generative AI stack may thus have editorial qualities directly traceable to a human being, which may then be considered under the First Amendment.
- Users also have their own rights and private interests, as generative AI tools may be used as a form of expression to enhance one’s speech. An apt analogy is the use of existing technology for protected forms of expression, such as digital cameras, newspapers, and radio.
Miers explained that, ultimately, there must be an “editorial calculus” undertaken in deciding how much human expression goes into the backend of a given AI system. Rozenshtein elaborated that the Supreme Court’s opinions in NetChoice demonstrate that the justices do not view algorithms as devoid of First Amendment value. Going back to the Bernstein litigation in the 1990s, federal courts have held that source code is protected speech. According to Rozenshtein, this reasoning holds water: Musicians communicate through their music, while mathematicians and computer scientists communicate through code. However, the takeaway is not that source code is always speech; rather, the best interpretation is that code is sometimes speech. Miers highlighted the challenging nature of this topic: there will always be a point at which, for emerging technology, one can trace a given output to some form of human decision-making. Thus, it will be very difficult to draw the line: At what point is algorithmic development no longer linked to human expression or decision-making?
As the discussion shifted to copyright, Miers stated that AI outputs are typically not eligible for copyright. One may argue that these outputs constitute human expression, or point to an artist who used AI tools in combination with human work, but ultimately, if a work is produced by an AI tool, it will not receive copyright protection. Upcoming cases will decide whether training on copyrighted works for the purpose of achieving a new output qualifies as fair use. Miers explained that if the training is used to produce outputs similar to the copyrighted training material, a judge is less likely to find fair use. However, if a model’s purpose is not to replicate or impersonate the copyrighted input, such training is more likely to be allowed.
In addressing how Congress might think about copyright issues, Miers elaborated that there are two potential approaches. The extreme approach would be for Congress to amend the U.S. Copyright Act to apply to generative AI, but this may make copyright law “brittle” given that generative AI is nascent and ever-changing. The more subtle approach would be to act on publicity rights at the state level. Rozenshtein explained that state regulations tend to have a large effect, pointing to the emissions regulations adopted in California, which have caused companies to adapt their offerings globally in order to comply. However, courts have also invoked the Dormant Commerce Clause to restrict states from passing state-specific regulations that burden out-of-state commerce. Thus, even if states pass AI-specific regulations, they may be struck down under that doctrine.
Regarding potential preemption questions, Rozenshtein argued that Congress should be able to preempt state legislation targeting the development of AI models, but that regulations governing the use of those models should remain under state control. There is nothing state-specific about the development of a model, but a state should be allowed to forbid certain uses, such as self-driving cars, according to Rozenshtein. Miers, by contrast, contended that AI should be regulated like the internet and thus be subject to express preemption. As states scramble to pass AI regulation, there will be more conflicts, which will cause confusion and expensive disputes. Miers argued that policymakers should learn from the internet regulatory space and adopt a more uniform approach.
Finally, regarding whether Section 230 should apply to AI, Rozenshtein explained that it is unlikely to apply to generative AI output and that platforms are more likely to be subject to general tort law, which will raise a host of interesting questions about whether developers acted reasonably. Meanwhile, Miers questioned whether it would be fair to hold a generative AI platform liable for a user’s attempt to circumvent its guardrails, and raised concerns for smaller developers: although large platforms like Google can afford to fight off every lawsuit, the broader ecosystem of individual developers cannot afford to fight every potential tort claim. Miers concluded that if Section 230 does not apply to AI, smaller companies are less likely to succeed.
The study group then concluded with an open discussion session, during which attendees were free to comment on and pose questions regarding the various issues raised.
Visit the Congressional Study Group on Foreign Relations and National Security landing page to access notes and information on other sessions.