Commentary

The Supreme Court’s major questions doctrine and AI regulation

September 6, 2023


  • Any effort by Congress or the White House to regulate AI may be struck down under the Supreme Court’s “major questions doctrine.”
  • The Court has used this doctrine to nullify administrative actions that, in its view, decide questions of vast “economic and political significance” without “clear congressional authorization.”
  • Lawmakers and regulators may be able to bypass this risk by using the government’s buying power to purchase AI services only from platforms that adhere to a set of voluntary standards.
The Court Chamber inside of the Supreme Court building in Washington, D.C. is seen on December 6, 2022. (Photo by Bryan Olin Dozier/NurPhoto)

There is reason for optimism about the federal government stepping up to create a policy framework for artificial intelligence (AI) that will keep us safe while enabling innovations that will improve all our lives. Congressional activity to date has featured serious questions and bipartisan approaches. The White House has negotiated voluntary commitments by major AI players, designed to protect the public, with the Federal Trade Commission (FTC) charged with enforcing the commitments.

But, beneath the surface, there is a shark in the water, ready to obstruct any congressional or administrative action.

That shark is the Supreme Court’s “major questions doctrine.”

The Court announced the doctrine in West Virginia v. EPA, a 2022 decision invalidating the Obama-era EPA’s “Clean Power Plan,” which would have required existing power plants to shift how they generate electricity. The EPA argued that Section 111 of the Clean Air Act, which directed the agency to identify and implement the “best system of emission reduction” from power plants, provided sufficient legal authority to adopt the plan.

The Court, however, said that the mandate was too broad and that administrative agencies must be able to point to “clear congressional authorization” when they claim the power to make decisions of vast “economic and political significance.”

Justice Kagan, in dissent, noted the thin support for the doctrine, writing, “The majority claims it is just following precedent, but that is not so. The Court has never even used the term ‘major questions doctrine’ before.”

Our purpose is not to address the merits of the doctrine. Rather, we seek to analyze its implications for the fledgling efforts to regulate AI.

Consider, for example, Senators Graham and Warren’s recently introduced legislation to create a Digital Consumer Protection Commission designed to, as they wrote in a New York Times op-ed, “prevent online harm, promote free speech and competition, guard Americans’ privacy and protect national security.”

Noting, “For more than a century, Congress has established regulatory agencies to preserve innovation while minimizing harm presented by emerging industries,” they justified creating such a commission on the grounds that “Congress is too slow, it lacks the tech expertise, and the army of Big Tech lobbyists can pick off individual efforts easier than shooting fish in a barrel.”

The proposed legislation, 158 pages long, mandates that the new agency undertake a wide spectrum of tasks related to transparency, competition, privacy, and national security, among other topics. We generally support the legislation’s direction, as well as a similar proposal for a new agency introduced by Senators Bennet and Welch, because we think, just as with previous such commissions, the country needs an expert agency to balance the need to incentivize private investment and innovation with the need to protect the public.

But would the commission’s important actions under the law survive a major questions challenge?

The truth is we don’t know.

The more important truth is nobody knows.

No one knows because, in its decision, the Court offered only vague standards for what constitutes a major question. The majority opinion, written by Chief Justice Roberts, said the doctrine should be used only in “extraordinary cases.” Yet, in recent terms, the Court majority has used the doctrine to override agency actions related to housing, vaccinations, and student loans.

In our years of government service at the Federal Communications Commission (FCC), we often had the task of informing a party to a proceeding that, based on FCC precedent, it was likely to lose. Inevitably, the party responded by countering that its case was extraordinary. We expect multitudes of litigants seeking to overturn agency actions to argue that their cases, too, are extraordinary, with a clearer judicial standard arriving, if ever, only after decades of litigation.

To make matters more difficult to decipher, Justice Gorsuch, in his concurring opinion, offers a different set of criteria, writing that the doctrine applies “when an agency claims the power to resolve a matter of great ‘political significance,’” when its action affects “a significant portion of the American economy,” or when it requires “billions of dollars in spending by private persons or entities.”

Certainly, that would apply to many actions affecting how AI develops and specifically to many decisions the new digital agency would be expected to make.

Justice Kagan’s dissent in the West Virginia case also points to what would be even more true of regulating new technology: “The majority’s decision rests on one claim alone: that generation shifting is just too new and too big a deal for Congress to have authorized it in Section 111’s general terms. But that is wrong. A key reason Congress makes broad delegations like Section 111 is so an agency can respond, appropriately and commensurately, to new and big problems. Congress knows what it doesn’t and can’t know when it drafts a statute; and Congress therefore gives an expert agency the power to address issues—even significant ones—as and when they arise. That is what Congress did in enacting Section 111. The majority today overrides that legislative choice.”

In short, the major questions doctrine would handcuff an expert agency explicitly tasked by Congress to address significant and evolving issues related to AI. In contrast to the history of expert regulatory agencies agile enough to respond to technology and marketplace changes, the Digital Commission would start in a judicial environment in which no one will be certain for years whether its interpretation of its own powers will be upheld or will fall victim to a single judge, or panel of judges, who disagrees with the agency’s judgment.

Sadly, the same vulnerability would apply to any FTC efforts to enforce the voluntary commitments. While we would disagree with the jurisprudence, it is easy to see how a judge, relying on the Supreme Court’s loose standards, could view any FTC enforcement as an effort to address a major question that only Congress is entitled to decide.

That is more than a legal problem. As we both know from our stints in government, law and regulation have a huge impact on driving capital investments to some enterprises and away from others.

The impact of the major questions uncertainty will be to freeze some investments, potentially cutting off funding for those AI platforms willing to live up to higher standards.

So, what is to be done?

One path is to hope—and we use the term with heavy irony—that the major questions doctrine is nothing more than a partisan excuse to strike down rules supported only by Democrats.

Given that the Court has only used the doctrine to strike down Biden administration policies and that President Trump was not exactly shy about using executive power, that is plausible.

And, as AI regulation has bipartisan support, perhaps the Court will find a way to distinguish between rules affecting the heavily Republican carbon industry and the more Democratic tech industry.

But hope, particularly hope for a judiciary that seems as divided as the country, is both problematic and unlikely to be an effective strategy.

Another path is to write legislation that explicitly addresses anything that could constitute a major question.

But it may be near impossible to pass such legislation.

We both were involved in the negotiations over and passage of the landmark Telecommunications Act of 1996. Justice Scalia referred to it as a “model of ambiguity.” He wasn’t wrong. But those areas of ambiguity were essential to the political task of gaining the votes necessary for passage.

Senators Warren, Graham, Bennet, and Welch may wish to review their legislative language to tighten some sections in anticipation of a major questions challenge, but the more important task is to get the votes necessary for passage so that there is an institution with the expertise—and agility—to act wherever evolving technology and the public interest intersect.

A third and underrated avenue is using the government’s buying power.

Government, across all jurisdictions, will be the most important purchaser of AI services. In areas such as defense, healthcare, and education, it will be the dominant buyer.

If the federal, state, and local governments were to announce that they will only purchase AI services from platforms that adhere to the White House’s voluntary commitments, it would send a signal to markets to prioritize funding for those enterprises that meet those standards.

Further, governments could convene all the major institutional buyers of AI to create a ratings agency capable of informing those buyers not just whether an AI system meets certain minimum standards but also which AI delivers the best performance while also protecting users and the public.

There is a long history of ratings enabling users, buyers, and distributors of products to make more informed decisions. Such ratings also have the salutary effect of providing financial incentives to those who offer more responsible products and of shaping how the supply side develops its products. Over time, they can be an evolving tool to continually improve the AI ecosystem.

Anything is possible, but it would be difficult for the Court to interfere with the government’s purchasing and convening power.

Still, the best path forward is for the Court to find an opportunity to limit, or even reverse, its major questions doctrine so that Congress may, as it did with other emerging technologies, set goals and delegate the analysis, rulemaking, and enforcement to an expert agency.

It is probably not an accident that the Court used the major questions doctrine to help coal, an industry with its roots in the 19th century, limp along into the 21st for a few more years than it otherwise would have.

But as an unintended consequence, the doctrine might distort the most important technological platform of this century to serve only private goals, instead of balancing the public and private goals that a bipartisan consensus in Congress seems to share.

As Justice Kagan noted in closing her dissent, “The subject matter of the regulation here makes the Court’s intervention all the more troubling. Whatever else this Court may know about, it does not have a clue about how to address climate change. And let’s say the obvious: The stakes here are high. Yet the Court today prevents congressionally authorized agency action to curb power plants’ carbon dioxide emissions. The Court appoints itself—instead of Congress or the expert agency—the decisionmaker on climate policy. I cannot think of many things more frightening.”

The prospect that the Court would appoint itself the decisionmaker on AI is equally frightening. It is the shark in the water that policymakers cannot ignore.