Misrepresentations of California’s AI safety bill
Commentary
September 27, 2024
Editor’s note: California Governor Gavin Newsom vetoed SB-1047 on September 29, 2024.
Described as everything from “light-touch” to “authoritarian,” the pending California Senate Bill 1047 (SB-1047), the Safe and Secure Innovation for Frontier Artificial Intelligence Models Act, is sparking fierce debate. While some disagreements about the bill stem from genuinely different philosophies on technology regulation, others arise from potential misrepresentations of the bill’s text. These have influenced the bill’s final amendments, and the amended bill passed the California legislature last month.
Now, all eyes are on Governor Gavin Newsom, who must sign or veto the bill by September 30. In this flashpoint for U.S. AI policy, it is crucial that the governor base his decision on the actual text of the bill, not its popular misrepresentations.
What does SB-1047 do?
SB-1047, introduced by State Senator Scott Wiener (D-San Francisco), aims to guard against potential catastrophic risks from future artificial intelligence (AI) systems, including mass casualties from a weapon of mass destruction and over $500 million in damages from a cyberattack on critical infrastructure. SB-1047 does so by requiring developers of “covered models” (AI models trained using large amounts of computing power costing over $100 million, or altered by “fine-tuning” at a cost of over $10 million) to exercise reasonable care in creating and following a safety and security protocol (SSP). An SSP includes cybersecurity measures to prevent model theft, specification of model testing procedures, and safeguards against dangerous capabilities. Outside of these high-level requirements, however, developers design the details of their own SSPs, allowing for agile, technically informed compliance.
Given that the standard of reasonable care already applies to AI developers through existing tort law, SB-1047 largely clarifies what reasonable care means for frontier development, as opposed to creating new, potentially burdensome compliance standards. Covered model developers must also undergo annual third-party SSP compliance auditing, publicly post a redacted version of their SSP, and implement a “kill switch” for all covered model instances under their control.
The most significant amendment to the bill means that California’s attorney general can now sue companies only once a critical harm is imminent or has already occurred, rather than for negligent pre-harm safety practices, as earlier versions allowed. This change, which was requested by frontier lab Anthropic, significantly decreases the protective power of SB-1047. The bill also creates a streamlined Board of Frontier Models, composed of experts from across the industry, which updates the covered model thresholds and issues technical guidance, replacing the broader Frontier Model Division of previous versions. Other provisions in the bill include whistleblower protections, a public AI-training cluster for startups and academics, and Know-Your-Customer requirements for exports of cloud computing resources for AI training.
One of the most important things to note about the bill is that only developers training models using very large and costly amounts of computing power have compliance responsibilities. As such, it targets only the largest developers while exempting small startups, small businesses, and academics.
Misrepresentation and amendment of SB-1047
Since its first draft, SB-1047 has undergone a series of changes and amendments. Much of the discourse that led to the final version of the bill centered on genuine disagreements about technology regulation, such as whether to regulate at the level of model development or at the level of application and use, and on real problems with the bill. For example, in the May 16 version, one developer could alter another’s model indefinitely without liability ever transferring to them.
However, the discourse has also been filled with false claims, or misrepresentations, of the bill’s contents from its opponents. Four of the most significant of these concern the confidence threshold for safety, the meaning of perjury, the scope of the “kill switch” requirement, and the scope of developers covered by the bill. SB-1047 has in turn been modified to accommodate opposition generated by some of these misconceptions.
The confidence threshold for safety
One common refrain has been that covered model developers would have to “prove,” “guarantee,” or “certify” that “every possible use” of their model would not cause catastrophic harm. Such a degree of confidence is indeed “virtually impossible,” as Caltech researchers opposing the bill put it. After all, the capabilities of frontier large language models (LLMs) are quite difficult to fully elicit, and industry best practices for doing so are still developing.
However, at the time the Caltech letter was released, the bill required only “reasonable assurance,” which, as the bill explicitly stated, “does not mean full certainty or practical certainty.” This misrepresentation likely played a role in removing the standard from the bill: In the final amendments, the requirement to provide “reasonable assurance” that a model does not pose an unreasonable risk was replaced with a duty to exercise “reasonable care” to prevent such risks.
Although reasonable assurance likely demanded a higher standard of care and greater confidence in safety practices, it is not as well established as the standard of reasonable care, which is backed by hundreds of years of legal precedent. The net result is likely less rigor in safety but more certainty in the legal environment.
The meaning of perjury
According to some critics, not only would developers have to impossibly “prove” the safety of their models; if they made any mistakes in this proof, the fact that SSPs were submitted under penalty of perjury could also send them to jail. At its most hyperbolic, this was characterized as giving the now-defunct Frontier Model Division “police powers” to “throw model developers in jail for the thoughtcrime of doing AI research.” Startup accelerator Y Combinator (YC), which has been particularly vocal in opposing SB-1047 throughout its development, promoted the idea that “AI software developers could go to jail simply for failing to anticipate misuse of their software.”
However, this is not how perjury would likely work in court. To be convicted of perjury in California, a defendant must “willfully state[] that the information was true even though [they] knew it was false.” As Senator Wiener said in his response to YC and the even more vocally opposed venture capital firm Andreessen Horowitz: “Good faith mistakes are not perjury. Harms that result from a model are not perjury. Incorrect predictions about a model’s performance are not perjury.”
Furthermore, perjury is rarely charged and even more rarely results in conviction, due to the difficulty of proving intent. Rather, the penalty of perjury more often serves to emphasize the need for truthful testimony. Whether criminal liability was appropriate is a separate question: Indeed, some warned that, with SB-1047, perjury enforcement could be used adversarially by an ambitious prosecutor. At the same time, given the magnitude of the harms the bill contemplates, intentionally lying about an SSP might warrant criminal liability, even if perjury were seldom enforced. Regardless, the penalty of perjury was replaced with California’s standard civil liability for lying in documents submitted to the government. Given the rarity of perjury convictions, this change might not make much difference.
The scope of the kill switch requirement
One provision of the bill that has particularly drawn the ire of supporters of open-source AI (models whose internal workings are publicly released for download and modification) requires covered model developers to implement the ability to enact a “full shutdown” of their model in an emergency. Renowned Stanford University AI expert Fei-Fei Li has claimed that this “kill switch” provision would in fact kill open-source AI, since developers cannot control these models once they are released.
Whether this specific claim was a “misrepresentation” of the bill is more ambiguous. On the one hand, some have said that it was, since Li’s claim came after the June 20 amendment, which stated that only covered model derivatives “controlled by a developer,” including unmodified copies, would require a kill switch. Furthermore, previous versions of the bill had unambiguously required developer control of the model, which did not stop clear misrepresentation of this point. On the other hand, the same amendment also said that the “full shutdown” requirement applied to “a covered model” without qualification, which would seem to include any covered model, even those that have been “open-sourced” and thus put out of the developer’s control.
This tension (between “a covered model” and “all covered model derivatives,” which included unmodified copies) could have caused confusion. SB-1047’s final amendments added the qualification that “a covered model” must also be “controlled by a developer” to be subject to shutdown capability requirements.
To clarify, the bill does make open-source AI more difficult to the extent that a jury would find that releasing models with dangerous capabilities without restriction falls short of reasonable care. This is possible, since committed individuals can remove the safeguards from current frontier open-source models in 45 minutes for less than $2.50. However, the bill’s effect on open-source release matters only to the extent that similar standards of reasonable care do not already apply through existing tort law.
The scope of developers covered by the bill
Potentially the most widespread misrepresentation of SB-1047 is that it directly applies to small businesses, startups, and academics. However, as noted above, unless a developer uses enormous and costly amounts of computing power, the bill essentially does not apply to them. If these thresholds were to become problematic in the future, the Board of Frontier Models could raise them.
Nevertheless, Li said that “budding coders and entrepreneurs” would be harmed by having to “predict every possible use of their model.” The Chamber of Progress argued that “[the bill’s] requirements expose new model developers to severe penalties and enforcement actions while demanding substantial upfront investment.” In response to Senator Wiener’s letter, Andreessen Horowitz claimed that “the bill applies to the fine-tuning of models, regardless of training costs,” by misconstruing the bill’s definition of “developer.” These statements all demonstrate a misunderstanding of one of the bill’s key features: that it deliberately targets only the most well-resourced developers.
In fact, SB-1047 originally defined which models would be covered based only on how much computing power was used to create them. The definition was later changed to also consider the cost of that computing power. This change was made because, as technology improves, powerful computing becomes cheaper; a compute-only threshold would eventually sweep in smaller companies able to train advanced AI models using large amounts of computing power. The change thus both facilitates competition and reduces the future burden on the government of verifying compliance.
SB-1047 in the spotlight
Amid this heated debate, the governor must either sign or veto the bill by September 30. In making his decision, Governor Newsom should be wary of the various ways SB-1047 has been misrepresented and should instead rely on the actual text of the bill and his assessment of its likely consequences.