Commentary

5 points of bipartisan agreement on how to regulate AI

Sorelle Friedler and Andrew D. Selbst
Andrew D. Selbst, Professor of Law, UCLA School of Law

August 15, 2025


  • Amid marked shifts around AI between the Biden and Trump administrations, bipartisan continuity is reflected in their attempts at internal regulation of AI use and procurement by the federal government.
  • There are five main points of agreement among the first Trump administration, the Biden administration, and the current Trump administration, including regulation of high-impact AI systems, reporting structures, and transparency around the government’s use cases.
  • Memos from both administrations also reflect points of disagreement in two areas: the need for equity protections and notices for individuals impacted by AI.  
U.S. President Joe Biden meets with President-elect Donald Trump in the Oval Office at the White House in Washington, U.S., on November 13, 2024. REUTERS/Kevin Lamarque

The Trump administration revealed its AI Action Plan in July with a focus on “global AI dominance.” Like so much else in the second Trump administration, artificial intelligence (AI) policy has become an ideologically polarized battleground. This marks a shift from prior administrations, including the first Trump administration, when pairing innovation goals with concerns about AI harm was broadly accepted as a baseline, even if views differed on how to address those harms. Yet amid these ideological shifts, one unheralded area of bipartisan continuity, from Trump I to Biden to Trump II, is the recognition of the need to internally regulate the federal government’s use and procurement of AI. This limited bipartisan continuity can help show where common ground can still be found on AI regulation.

Beginning in the waning days of the first Trump administration, the federal government has recognized the need for limits on its own use of AI. In an executive order, the first Trump administration issued the first set of government AI principles. The Biden administration’s White House Office of Science and Technology Policy (OSTP) followed by creating its AI Bill of Rights, which sought to make “automated systems work for the American people.” This led to the first of two binding memoranda by the Office of Management and Budget (OMB): M-24-10 (referred to here as the “Biden AI memo”). In April 2025, the Biden AI memo was replaced by the Trump administration’s own memo, M-25-21 (the “Trump AI memo”). While the new AI Action Plan describes the Trump AI memo as “advanc[ing] AI adoption in government by reducing onerous rules imposed by the Biden Administration,” there are many points of agreement between the memos.

In a forthcoming article, we detail the genesis of the OMB AI memos, the substantive requirements of these memos, the points of bipartisan agreement, and the few points of disagreement. We summarize those points here.

5 points of bipartisan agreement 

There are five main points of bipartisan agreement between the administrations’ memos. Taken together, these form a useful baseline for future bipartisan AI regulation and have already been enacted within the executive branch.

Regulate and presumptively identify specific high-impact AI systems 

The OMB memos cover “high-impact” AI systems, including systems related to employment, health and safety, critical infrastructure, or other matters of rights or safety. Specific types of systems are presumptively covered, including criminal risk assessments, assessments of requests for federal services, health care-related AI, and others. These definitions indicate agreement that specific high-impact systems, identified by their effects on rights and safety, require established guardrails.

Create systematic governance and reporting structures within agencies

To oversee these requirements, both OMB memos establish reporting structures within agencies, including Chief AI Officers (CAIOs) and an interagency council convened to coordinate cooperation and consistent implementation of the policies across agencies. These structures provide agencies with support and clear lines of responsibility, coupled with enforcement of the memoranda’s requirements via OMB oversight.

Prohibit use of high-impact AI systems without proactive protections

The memos institute a requirement that the federal government cannot use covered AI systems unless and until specified minimum practices have been put in place. While extension and waiver processes are provided, this requirement sets clear expectations for the federal government and, importantly, ensures that these protections are in place before high-impact AI systems can be used.

Require specific minimum practices for high-impact AI systems

The two memos show continuity not only on the need for required minimum practices in high-impact cases, but also on the specific practices themselves. Both memos identify testing, public consultation, mitigation of unlawful discrimination, impact assessments, independent review, ongoing monitoring, human oversight, human review and appeal, and other practices as minimum requirements. The practices are described with sufficient generality to be implemented appropriately across a wide variety of AI use cases and types, while providing enough specificity that OMB and the public can verify the standards have been met. These minimum practices, coupled with the requirement that covered AI cannot be used until they are in place, stand to provide important protections to the public when rights or safety are impacted by AI use.

Provide transparency into AI use cases and associated protections

Both memos require a publicly available inventory of federal government AI use cases. For reporting year 2024, over 1,700 AI use cases were identified, with 227 designated as high impact. The memos do not define specific reporting procedures, so it is not yet clear what the Trump administration will change; the most recent federal inventory provided transparency into both the use cases and their key risks and associated safeguards. This transparency mechanism is a key component of public accountability.

2 points of disagreement 

When the Trump administration revised the Biden AI memo, it made some important substantive changes. The main points of disagreement concern equity and individual recourse.

Whether equity protections should go beyond protection from unlawful discrimination

The Biden AI memo included provisions to support equity and proactively mitigate algorithmic discrimination, for example by requiring agencies to “[m]itigate disparities that lead to, or perpetuate, unlawful discrimination or harmful bias, or that decrease equity as a result of the government’s use of the AI.” The Trump AI memo limits its requirements to mitigating only discrimination recognized as unlawful elsewhere in the law, while separately issuing an executive order instructing the federal government to disregard disparate impact doctrine as a form of unlawful discrimination. Since disparate impact is a key legal theory in the AI context, these instructions could produce importantly different outcomes for AI-driven discrimination. Alongside the AI Action Plan, the Trump administration released an executive order on “Preventing Woke AI” that aims to prohibit federal government procurement and use of large language models (LLMs) that incorporate steps to support diversity, equity, and inclusion, or “DEI,” in generated responses.

Whether individuals should receive notice when AI impacts them

The Trump AI memo turns away from the Biden-era rights-based framework and follows more of a risk-based regulatory model, reflecting global trends in AI regulation. Specifically, while the Biden AI memo required that individuals subject to high-impact AI systems receive notice, the Trump AI memo removed these requirements. Giving notice to someone that they have been impacted by an AI system is an important prerequisite for that person to advocate for their rights, so its removal suggests a turn away from individualized rights protections. This may leave room for harms, such as the removal of government services based on AI-driven fraud determinations, to occur without individuals having the knowledge to effectively contest such decisions.

Looking forward 

Administrations of both parties have now agreed that government use of AI should include baseline protections against AI harm in high-impact settings. Such baseline requirements, at least within the federal government, include clear lines of responsibility, public transparency, and required risk-reduction practices. Still, we do not wish to minimize the Trump administration’s harmful position on discrimination, which is clearly contrary to existing law. Moreover, given the Trump administration’s deregulatory preferences and frequent disregard for written legal requirements, it remains to be seen how the administration will interpret and implement even the requirements it drafted. While partisan debates over the proper focus and scope of AI regulation will continue, these areas of agreement can form a basis for state and federal legislators to build upon.
