Commentary

New OMB memos signal continuity in federal AI policy

May 8, 2025


  • OMB released two memos providing guidance on the “efficient acquisition” of AI and “acceleration” of its use in the federal government.
  • Though President Trump revoked some Biden-era guidance on government use of AI, the memos largely reaffirm the same strategy that seeks to accelerate AI innovation and adoption while safeguarding public trust.
  • The administration’s forthcoming AI Action Plan might reflect these same priorities and indicate a more institutionalized framework around federal AI policy.
U.S. President Donald Trump holds a signed executive order on AI in the Oval Office of the White House, in Washington, U.S., on January 23, 2025. REUTERS/Kevin Lamarque

The White House Office of Management and Budget (OMB) issued two memos on April 15, updating its policies on the use and acquisition of artificial intelligence (AI) in the federal government. The memos—M-25-21 and M-25-22—replace earlier guidance from the Biden administration and build on frameworks introduced by executive orders (EOs) under the first Trump administration. Together, the new memos reaffirm a federal strategy seeking to accelerate AI innovation and adoption while safeguarding public trust, an objective spanning the last three administrations. The memos preserve the role of Chief AI Officers (CAIOs), federal risk management frameworks, and procurement rules, possibly signaling continuity of such priorities in the administration’s forthcoming AI Action Plan. 

Innovation, governance, and public trust: Comparing guidance on AI use in the federal government 

While federal AI policy has shifted between administrations, its core elements have remained stable. Two executive orders from the first Trump administration set early expectations for trustworthy AI adoption, including risk-based principles and agency inventories. At the start of his second term, President Trump revoked several prior directives, including President Biden’s 2023 AI executive order, which had established governmentwide standards for safe, secure, and trustworthy AI deployment. This repeal nullified Biden’s executive order itself but did not address the agency actions taken under it. On his third day in office, President Trump signed EO 14179, which initiated a process to review and potentially revise or rescind these inherited policies. The two new OMB memos clarify the current guidance on federal use and acquisition of AI systems.

The first, titled “Accelerating Federal Use of AI through Innovation, Governance, and Public Trust,” rescinds and replaces Biden’s memo on the same topic, providing updated guidance on how federal agencies should use AI technologies.  

The new memo provides guidance similar to that of its Biden-era predecessor, including directions to agencies to “accelerate the Federal use of AI by focusing on three key priorities: innovation, governance, and public trust.” It also calls on agencies to invest in existing AI models, reuse code, and encourage open-source development, paralleling the Biden-era memo’s directions for agencies to ensure access to open-source libraries and “enable sharing and reuse of AI models, code, and data.”

Additionally, the new memo cements the CAIO position, which was established by President Biden. These officials are charged with ensuring compliance with AI policies and championing the agency’s AI strategy. The new memo requires every executive agency to name a CAIO within 60 days if it has not already done so. As of July 2024, at least 57 federal agencies had appointed CAIOs as directed by the Biden administration’s memo. The new memo also continues governance structures established by OMB under Biden, including the Chief AI Officer Council chaired by OMB, agency-level AI governance boards, and expanded AI use-case inventories.

Defining ‘high-impact AI’ and streamlining risk management 

The new guidance folds the Biden-era “rights-impacting” and “safety-impacting” categories of sensitive AI uses into a single “high-impact AI” category, defined as any AI application that “could have significant impacts when deployed,” particularly when its outputs serve as a basis for consequential decisions affecting individuals’ rights, opportunities, access to services, or safety. When an application meets that threshold, agencies must still subject it to pre-deployment testing, documented impact assessments, continuous performance monitoring, and meaningful opportunities for human intervention, in line with the requirements of the Biden-era memo. The new guidance also retains the previous waiver process for cases where an agency believes an AI application requires an exemption from certain risk controls, but it now explicitly requires public disclosure of any granted waivers.

The new memo provides a non-exhaustive list of high-impact uses in an appendix, including AI used in critical infrastructure (e.g., medical devices or diagnostics) and in biometric identification systems. However, the new list omits some use cases from the previous OMB memo, such as AI involved in election infrastructure or used to replicate a person’s likeness without consent.

While the previous compliance deadline was Dec. 1, 2024, agencies now have until April 15, 2026, to bring every high-impact system into compliance or shut it down.  

Updates on AI acquisition: Comparing the second set of memos  

The Trump administration’s memo, “Driving Efficient Acquisition of Artificial Intelligence in Government,” revises the procurement guidance for agencies on AI systems and services, replacing the 2024 memo titled “Advancing the Responsible Acquisition of Artificial Intelligence in Government.” The new memo applies to any solicitations issued after Oct. 1, 2025, giving some lead time to adapt to the updated rules. It applies to most executive agencies, with limited exceptions for certain defense and intelligence acquisitions. Agencies have roughly nine months to update their internal policies based on the new procurement guidance.

Notably, the new memo continues most procurement principles from its Biden equivalent, such as promoting a competitive AI marketplace, making risk-informed purchasing decisions, and ensuring collaboration throughout the acquisition process.  

Unlike its Biden-era predecessor, the new memo bars vendors from training commercial models on non-public government data without the agency’s explicit consent. It also addresses intellectual property rights, ensuring agencies can continue to use AI outputs and any models trained on government data. Additionally, the new OMB memo takes a stronger protectionist tone than the previous acquisition memo, urging agencies to “maximize” their use of AI made in the U.S. to “promote human flourishing, economic competitiveness, and national security.”

To support implementation, the new memo allots the General Services Administration (GSA) 200 days to develop an online repository of AI acquisition best practices and tools, including sample contract language, market research on AI products, and successful procurement approaches. The Biden-era OMB memo required CAIOs to report best practices within six months but only encouraged GSA to “explore” best practices, without a firm timeline.

Despite these small differences, the continuity between the memos is more striking than the changes. Both encourage cross-agency acquisition collaboration, risk-informed purchasing decisions, and a competitive AI marketplace that avoids vendor lock-in.

One area to watch will be the actual implementation and enforcement of the memos. While OMB memos govern agency behavior, binding private contractors typically requires incorporating provisions into the Federal Acquisition Regulation (FAR) or agency-specific rules. The memos’ effectiveness will ultimately depend on agencies’ ability to embed their requirements into contract language, procurement processes, and oversight structures.

A through line in federal AI strategy 

Taken together, OMB’s new memos underscore a consistent through line in U.S. federal AI policy spanning the past three administrations. Signed in 2020, Trump’s EO 13960 articulated government principles for AI use that remain prominent today: promoting innovation while embedding trustworthiness. The Biden administration expanded those principles in its comprehensive 2023 AI executive order and the 2024 OMB guidance, which imposed the first binding governmentwide safeguards. Now, under new political leadership, OMB has chosen not to dismantle that architecture but to reinforce it.

This continuity provides stability for federal agencies and contractors: It means agency investments made in AI governance capacity (such as hiring AI experts, setting up oversight processes, and cataloging AI systems) will remain relevant and worthwhile under successive administrations. Over time, these rules for the federal government may also indirectly affect the private sector as vendors rarely maintain separate standards for public and private sector clients. 

These OMB memos may also foreshadow the direction of the forthcoming AI Action Plan mandated by a new Trump executive order. That plan, which is under development by the White House Office of Science and Technology Policy (OSTP) in coordination with other senior White House officials, will outline a broader strategy “to enhance America’s position as an AI powerhouse.” 

The through line between different administrations may indicate a maturation of federal AI policy from high-level principles and one-off initiatives to a more institutionalized framework that persists across presidencies. The upcoming AI Action Plan is a chance to continue in that long-term direction. 

For the immediate future, agencies must translate guidance into practice: updating AI use-case inventories, finalizing risk management practices, and embedding new clauses in contracts. These quiet administrative steps advance federal AI adoption and set market standards for AI systems that build in accountability and oversight.

The Brookings Institution is committed to quality, independence, and impact.
We are supported by a diverse array of funders. In line with our values and policies, each Brookings publication represents the sole views of its author(s).