Commentary

The empty national AI policy framework: Who is in charge of those in charge?

March 31, 2026


  • The Trump administration recently unveiled a national policy framework for AI, but it misses a critical dynamic in AI governance around responsibility and accountability.
  • Effective oversight begins with recognizing the effects of AI—its harms and risks—and the causes of those effects—power and competition.
  • Meaningful policy must structure power by focusing on four interlocking principles: accountability, access, agency, and action.
WASHINGTON, DC - JULY 23: A view of the shadow of U.S. President Donald Trump as he speaks during the "Winning the AI Race" summit hosted by All‑In Podcast and Hill & Valley Forum at the Andrew W. Mellon Auditorium on July 23, 2025 in Washington, DC. Trump signed executive orders related to his Artificial Intelligence Action Plan during the event. (Photo by Chip Somodevilla/Getty Images)

On March 20, the Trump White House released its National Policy Framework for Artificial Intelligence. It is filled with worthy aspirations such as protecting children, promoting innovation, and ensuring American leadership.

But it sidesteps the most important question in AI governance: Who is in charge of those in charge?

Having sidelined Congress in seemingly everything else, the administration now delegates AI policy to the legislative branch, offering a series of “Congress should” bromides. At the same time, the policy is devoid of any meaningful discussion of the responsibility and accountability of those whose decisions created the very issues it seeks to address.

The plan mistakes symptoms for causes.

Much of the public debate about AI has focused on existential risks—doomsday scenarios in which machines escape human control. But the more immediate and tangible risk is not hypothetical. It is the concentration of AI decisionmaking in a handful of individuals responsible primarily to themselves and their shareholders. In the early digital era, as small startups grew to become Big Tech, firms like Google, Amazon, and Meta fought regulation so they could write their own rules for the new digital order.

Now Big Tech has become “Big AI.”

In this environment, the first step in developing policy is to recognize that AI is not a single unified subject. Its scale is too vast, its applications too diverse, and its consequences too far-reaching. Effective oversight thus begins with recognizing the effects of AI—its harms and risks—and the causes of those effects—power and competition. The Trump plan is a laundry list of effects that ignores how they came to exist in the first place.

Meaningful policy must begin by structuring power, not simply describing its consequences. This means focusing on four interlocking principles: accountability, access, agency, and action.

Accountability

Accountability means that those who control AI’s essential capabilities are answerable for the consequences of their decisions. When a handful of companies make consequential choices about model behavior, data use, and system deployment with little or no transparency or review, accountability is absent.

The exploitation of personal privacy, the amplification of misinformation, and the consolidation of market power by Big Tech were not accidents of technology. They were business decisions that externalized risks and costs onto society while internalizing the savings to maximize profit. Many of the same firms are now positioned to dominate the AI era using similar strategies.

AI policy provides the opportunity to redress the absence of digital accountability. Power must be accompanied by accountability.

Access

Access determines who can participate.

AI is defined by controlled gateways. Frontier models are accessed through proprietary interfaces. Computing power is concentrated in a small number of cloud providers, many of which are also model developers. Middleware—the essential link between applications and models—is similarly concentrated. Control over these essential functions means control over the terms of participation—who can build, what they build, and at what cost.

The result is innovation by permission. Without meaningful access to interconnect on fair and nondiscriminatory terms, competition remains conditional, and innovation remains subject to the decisions of gatekeepers.

Agency

If access determines who can participate, agency determines who decides.

Today, that authority is concentrated in a small number of firms that control AI’s essential assets. Model developers determine the capabilities and constraints of their platforms. Middleware providers control the essential integration of applications with models through APIs, plug-ins, and other means. All of this, of course, happens on cloud services dominated by model and middleware owners. Because these few firms control the infrastructure, they control the choices available to everyone else.

The question of agency goes to the heart of whether the future of AI will be decided by a small number of corporate CEOs, or by competitive markets operating within rules set in the public interest.

Action

Action is what turns principles into reality.

It means moving beyond aspirations to enforceable obligations—rules that define acceptable conduct, mechanisms that ensure compliance, and remedies that address violations. Historically, transformative technologies from the railroad to atomic power have required institutions with authority and expertise to oversee their development and use. AI is no exception. Action is more than bromides about what should be done; it requires an expert federal agency to establish behavioral expectations rather than allow Big AI to make its own rules.

Without action, accountability is voluntary, access is conditional, and agency remains concentrated.

Toward a real policy framework

A national AI policy that worries about outcomes but ignores who controls them substitutes wishful thinking for meaningful governance. Power does not regulate itself. The policy question to be addressed is not whether AI will shape our future, but whether its rules will be written by those who wield its power or by democratic institutions answerable to the public. 

In this regard, the Trump framework comes up empty.

Acknowledgements and disclosures

Google, Amazon, and Meta are general, unrestricted donors to the Brookings Institution. The findings, interpretations, and conclusions posted in this piece are solely those of the authors and are not influenced by any donation.

Tom Wheeler is the former chairman of the Federal Communications Commission. Bill Baer is the former assistant attorney general for antitrust. Their book, “Power & Peril: Big Tech, Big AI, and the Fight for the Digital Future,” will be published by Brookings Press.
