As technology races forward, the ability to deal with it races in reverse

October 16, 2024

In only “a few thousand days,” artificial superintelligence (ASI) may be unleashed on humanity, according to artificial intelligence (AI) pioneer and OpenAI CEO Sam Altman. ASI, a capability far beyond that of today’s AI, is software that gives machines “intellectual powers beyond those of humans.”
“A few thousand days” is the blink of an eye in technology time. It has been about two thousand days since Altman’s company introduced GPT-1, its first large language model, on June 11, 2018. Since then, the exponential increase in the ability of machines to “think” has sharply compressed the time humans have to deal with the accompanying changes.
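That figure is easy to verify. Here is a minimal back-of-the-envelope check in Python, using only the standard library (the dates come from the text above; the variable names are ours):

```python
from datetime import date

# GPT-1 was introduced on June 11, 2018; this commentary is dated October 16, 2024.
gpt1_release = date(2018, 6, 11)
commentary_date = date(2024, 10, 16)

# Days elapsed between the two dates.
print((commentary_date - gpt1_release).days)  # 2319 -- roughly "two thousand days"
```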
Into that closing window of time step the agencies of the federal government. These agencies are the first line of defense to protect national security and the public interest. Unfortunately, that first line has been severely weakened, if not gutted, by recent decisions of the Supreme Court of the United States.
First-line defenders
The agencies of government are the first line of defense because they are already in place with sectoral expertise. However, recent decisions by the Court raise the question of whether the judicial branch would sustain such agencies’ authority to act. The Court’s opinions appear to answer that question with “probably not.”
Congress created federal agencies and equipped them with the relevant expertise. Typically, such authorizing legislation includes broad directions to, for instance, protect the public interest or national security. It is often left to the agencies to determine the specific implementation of congressional intent—an agility that is especially essential amidst the onslaught of new technology.
The Department of Defense (DOD) clearly has a primary national security obligation—including the responsibility to evolve its activities to both use and defend against AI. Protecting national security is also the mandate of the Department of Homeland Security (DHS), Department of Energy (DOE), and Federal Aviation Administration (FAA), among others. Under the Court’s recent decisions, it appears that the ability of these other agencies to meet their national security obligations through regulation has been constrained.
A similar reality exists for other federal agencies given statutory authority to protect the “public interest.” These agencies range from the Federal Communications Commission (FCC) to the Environmental Protection Agency (EPA) and the Federal Trade Commission (FTC). The Court’s decisions constrain these and other agencies’ abilities to act on that broad mandate by establishing regulatory guardrails to protect the public interest from the effects of AI.
Supreme Court decisions
Cutting back on the regulatory capability of government has long been a staple of conservative dogma. Turning this wish into reality with a conservative Supreme Court unfortunately coincides with the arrival of AI. Two recent Supreme Court decisions have severely cut back the ability of those first-line agencies to deal with the changes AI is throwing at society. One decision dealt with clean air regulation, while the other dealt with fishing. Both decisions are far removed from the onward march of machine intelligence, but nonetheless represent a threat to humanity’s ability to respond to machines that “think.”
A June 30, 2022 decision, West Virginia v. EPA, codified a concept that had been debated in conservative legal circles for years: the so-called Major Questions Doctrine. The doctrine covers “major questions,” which are issues of vast economic and political importance. Of specific significance, the Supreme Court held that on such questions, federal agencies may act only when Congress has specifically authorized them to do so in statute.
The case was unusual because the EPA’s rule establishing a Clean Power Plan for the conversion to lower-emission energy never went into effect: it was first stayed by the Supreme Court and then repealed by the Trump EPA. That the Supreme Court preemptively weighed in on the authority behind a regulation that no longer existed seemed to indicate a pressing desire to constrain the authority of executive branch and administrative agencies. No longer would an agency be allowed to rely on what the Court called a “plausible textual basis” (in this case, the language of the Clean Air Act); instead, regulation would have to “point to a clear congressional authorization.”
Almost exactly two years later, on June 28, 2024, the Supreme Court overturned a 40-year-old precedent that gave expert agencies the benefit of the doubt in exercising their responsibilities. The 1984 Supreme Court decision, Chevron U.S.A., Inc. v. Natural Resources Defense Council, Inc., established the so-called Chevron Deference Doctrine, which said that where statutes are ambiguous, the courts should defer to the interpretation of the expert agencies created by Congress. The new decision, Loper Bright Enterprises v. Raimondo, arose from a National Marine Fisheries Service regulation requiring herring boat owners to pay for the inspectors who collect data on their catch.
In the herring case, both the District Court and the Court of Appeals relied on Chevron to declare the action appropriate. The Loper Bright decision, written by Chief Justice Roberts, reversed those decisions, opining that “[c]ourts must exercise their independent judgment in deciding whether an agency has acted within its statutory authority.” On its face, this might seem rather benign, but its effect is to substitute the judgment of non-expert judges for that of expert agencies created by Congress. While the opinion speaks of “courts,” in practice, for major cases, this means the nine members of the Supreme Court. “In one fell swoop,” Justice Elena Kagan wrote in dissent, the Supreme Court “gives itself exclusive power over every open issue—no matter how expertise-driven or policy-laden.”
The relevance of these decisions to AI is best reflected in Justice Kagan’s dissent in West Virginia v. EPA. “Whatever else this Court may know about, it does not have a clue about how to address climate change,” the Justice wrote. “The Court appoints itself—instead of the Congress or the expert agency—the decision-maker on climate policy. I cannot think of many things more frightening.”
Something “more frightening”
What is even “more frightening” is the Court establishing itself as the expert on AI. “We are a court—we really don’t know about these things,” Justice Kagan observed during oral argument in a separate internet-related case. “We are not, like, the nine greatest experts on the internet.”
Beyond the Court’s own lack of expertise, by crippling the agencies that do possess such expertise, the Court has passed front-line AI decision-making to the AI companies themselves. Old-fashioned industrial-era regulation may not be the answer for the era of intelligent machines, but neither is turning public interest and national security decisions over to those who can code and their investors.
We want AI innovators to continue to push the boundaries of what they are developing. Their actions hold the promise to solve pressing problems, improve lives, and spur economic activity. At the same time, the decisions those companies make also have the potential to do the opposite, harming the public interest and national security.
AI is not our nation’s first experience with digital companies making the rules in the absence of public oversight. We have been here before, when self-interested executives made unilateral decisions that affected the rest of us. As one group of AI observers wrote, “Social media was the first contact between A.I. and humanity, and humanity lost.”
Congress should act, of course. When President Biden signed the October 2023 executive order on AI, he reflected, “we still need Congress to act.” Expecting such complex action from a Congress that cannot pass a federal budget, however, seems problematic.
Responsible regulators use the agility of their authority to look forward and assess how the instructions from Congress relate to current realities. The agencies, after all, were created to exercise the kind of focused sectoral expertise that neither the Congress nor judges possess. Relying on that expertise and agility has served the nation well for decades.
A time of exponential technological change, in which machines are threatening to surpass human intelligence, is not the moment to pull back on the American public’s first line of defense.