The Biden administration’s approach to the governance of artificial intelligence (AI) began with the Blueprint for an AI Bill of Rights, released in October 2022. This framework highlighted five key principles to guide responsible AI development, including protections against algorithmic bias, privacy considerations, and the right to human oversight.
These early efforts set the tone for more extensive action, leading to the release of the Executive Order on Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence, or the White House EO on AI, on October 30, 2023. This EO marked a critical step in defining AI regulation and accountability across multiple sectors, emphasizing a “whole-of-government” approach to address both opportunities and risks associated with AI. Last week, it reached its one-year anniversary.
The 2023 Executive Order on Artificial Intelligence represents one of the U.S. government’s most comprehensive efforts to secure the development and application of AI technology. This EO set ambitious goals aimed at establishing the U.S. as a leader in safe, ethical, and responsible AI use. Specifically, the EO directed federal agencies to address several core areas: managing dual-use AI models, implementing rigorous testing protocols for high-risk AI systems, enforcing accountability measures, safeguarding civil rights, and promoting transparency across the AI lifecycle. These initiatives are designed to mitigate potential security risks and uphold democratic values while fostering public trust in the rapidly advancing field of AI.
To mark the one-year anniversary of the EO, the White House released a scorecard of achievements, pointing to the elevated work of various federal agencies, the voluntary agreements made with industry stakeholders, and the persistent efforts to ensure that AI benefits the global talent market, delivers environmental benefits, and protects, rather than surveils or displaces, American workers.
One example is the work of the U.S. AI Safety Institute (AISI), housed in the National Institute of Standards and Technology (NIST), which has spearheaded pre-deployment testing of advanced AI models, working alongside private developers to strengthen AI safety science. The AISI has also signed agreements with leading AI companies to conduct red-team testing to identify and mitigate risks, especially for general-purpose models with potential national security implications.
In addition, NIST released Version 1.0 of its AI Risk Management Framework, which provides comprehensive guidelines for identifying, assessing, and mitigating risks across generative AI and dual-use models. This framework emphasizes core principles like safety, transparency, and accountability, establishing foundational practices for AI systems’ development and deployment. And just last week, the federal government released the first-ever National Security Memorandum on Artificial Intelligence, which will serve as the foundation for the U.S.’s safety and security efforts when it comes to AI.
The White House EO on AI marks an essential step in shaping the future of U.S. AI policy, but its path forward remains uncertain with the pending presidential election. Since much of the work is being done by and within federal agencies, its tenets may outlive any repeal of the EO itself, helping the U.S. stay relevant in developing guidance that balances the promotion of innovation with safety, particularly in national security. However, the EO’s long-term impact will depend on the willingness of policymakers to adapt to AI’s rapid development while maintaining a framework that supports both innovation and public trust. Regardless of who leads the next administration, navigating these challenges will be central to cementing the U.S.’s role in the global AI landscape.
In 2023, Brookings scholars weighed in following the adoption of the White House EO. Here’s what they have to say today, one year later.
Treasury's AI Executive Order response is the gold standard
The Treasury Department went above and beyond the White House Executive Order’s requirements, which were more modestly confined to asking the agency to write a report about AI and cybersecurity risks. As Treasury wrote in its report, artificial intelligence (AI) is both a source of risk for malfeasance and a source of opportunity for “significantly improv[ing] the quality and cost efficiencies of [financial institutions’] cybersecurity and anti-fraud management functions.”
Moving beyond cybersecurity, Treasury’s Financial Stability Oversight Council (FSOC) brought together all financial regulators for a two-day symposium jointly held with Brookings on AI and financial stability. In giving the keynote at the event, Secretary Yellen unveiled a new request for information (RFI) on the uses, opportunities, and risks of AI in the financial services sector. This RFI will better inform Treasury, FSOC, and all financial regulators about the ideas and concerns held by financial institutions, academics, technology experts, and the rest of the public.
While independent financial regulators were excluded from the White House EO—a common practice for executive orders—FSOC’s engagement in this space serves a critical role in convening and informing financial regulators and focusing their engagement. As FSOC Deputy Assistant Secretary Sandra Lee and I wrote: “Thoughtful and open dialogue among leaders from multiple vantage points can help policymakers, industry leaders, and academics better harness the potential of AI while guarding against its risks.” While it remains to be seen what specific actions Treasury and the financial regulators will take, Treasury’s engagement and follow-through have exceeded its mandate.
The mother of all AI legislation
President Biden’s 2023 Executive Order on Artificial Intelligence (AI) is the longest executive order in history at 110 pages. At a conference I attended in Brussels soon after its release, Anu Bradford, who famously coined the term “the Brussels effect,” called the order “the mother of all AI legislation.” It spelled out an array of actions for federal agencies to take in their interactions with AI—both as policymakers and as users—within periods of up to 270 days from the order’s release. Now, one year after the EO’s adoption, the agencies have handed in their homework.
Perhaps the most consequential product of these actions is the Office of Management and Budget (OMB) guidance on the management of AI systems and applications used by federal agencies. The final guidance, issued in March 2024, defines “rights-impacting” and “safety-impacting” AI for the purposes of agency decisions that use AI and the risk assessments agencies are required to conduct. The definition of rights-impacting AI includes systems that affect “civil rights, civil liberties, or privacy…equal opportunities…or access to or the ability to apply for critical government resources.” Meanwhile, “safety risks” encompass “human life and well-being…climate or environment…critical infrastructure…or strategic assets or resources.”
The OMB guidance will flow into the marketplace for AI through the agencies’ roles as customers. OMB’s definitions of risk resemble the “high-risk” categories regulated in the European Union’s Artificial Intelligence Act, demonstrating significant transatlantic alignment on AI risks that is taking effect even before the EU legislation.
The OMB guidance does not apply to national security agencies, but the more recent National Security Memorandum and the accompanying “Framework to Advance AI Governance and Risk Management in National Security,” issued on October 24, 2024, include a related set of limits on these agencies, alongside steps to promote the responsible adoption and use of AI and to strengthen the capabilities of national security agencies.
The limits in the framework include prohibiting certain AI use cases affecting individual rights, such as tracking individuals based on their exercise of protected rights, specifically calling out the suppression of free speech or the right to counsel, discrimination based entirely on protected categories, and certain inferences based on biometric data. It also limits the use of AI for intelligence products and decision-making to ensure that these generally involve human judgment and that significant use of AI is apparent. The framework also establishes categories of “high-impact use cases” and “use cases impacting federal personnel,” which require a variety of procedural safeguards. National security agencies are directed to put risk assessment practices in place for high-impact and new AI uses within 180 days. The use of both risk categories and prohibited use cases, along with the emphasis on individual rights and limits on government use, resembles the EU AI Act.
Anu Bradford was right.
The EO has done little to rein in Big Tech
In the year since President Biden released a pioneering Executive Order (EO) on artificial intelligence (AI), the AI market has generated more than $214 billion in revenues, as AI tools seemingly invade every part of our lives—from how we work and study to how we create and shop. In the appalling absence of legislation by Congress to meaningfully rein in the tech giants as they rapidly roll out AI products with their typical “move fast and break things” ethos, the Biden administration put forth detailed guidance to federal agencies and offices about how they can use, acquire, and oversee an increasingly vast range of AI systems.
The EO is a detailed roadmap for how to put responsible AI principles into practice. According to the administration’s own report card, the agencies completed each of the 150 requirements related to federal policies, practices, and procurement across a range of sectors, including employment, housing, law, safety, civil rights, and international collaboration. Spelling out what the administration means by safety and rights impacts allows agencies to assess their progress, which they have done in a plethora of accompanying frameworks, reports, and guidance.
However, I am concerned the EO does little to mitigate the monopolistic power of a handful of tech giants. The U.S. government cannot become further dependent on only a few corporate behemoths. To help mitigate these concerns, governing cloud providers as public utilities and data as a public good should be front and center in efforts to codify the EO in laws and regulation.
The EO implemented important steps towards safer and more trustworthy AI systems. Yet, we have heard virtually nothing about the EO’s progress throughout the election cycle, despite AI’s implications for job security in the coming years. Moreover, the tech giants cannot be expected to regulate themselves. Although significant progress has been made in some domains, voluntary commitments have been shown to be insufficient. Indeed, surveillance and data collection have intensified over the past year amid ongoing privacy, contract, and copyright violations as AI corporations seek to build bigger and better AI systems. They have done all of this without internalizing the societal or environmental costs of doing so.
The government must leverage its procurement power to source only responsible, ethical AI that adheres to labor and environmental standards, while respecting copyright and privacy. Similarly, advancing AI and integrating it into the government without any laws to ensure competition and protection of the public interest will further entrench the power of the handful of Big Tech corporations that have access to the data, compute, and talent needed to develop and deploy advanced AI systems.
The National Security Memorandum on AI is a missed opportunity
In July 2024, the U.N. Secretary-General said bluntly, “Machines that have the power and discretion to take human lives are politically unacceptable and morally repugnant, and should be banned by international law.” More specifically, he called for “the conclusion, by 2026, of a legally binding instrument to prohibit lethal autonomous weapons systems that function without human control or oversight and that cannot be used in compliance with international humanitarian law, and to regulate all other types of autonomous weapons systems.”
The U.S. Executive Order on AI issued one year ago, on October 30, 2023, did not mention lethal autonomous weapons, deferring instead to the Political Declaration on the Responsible Military Use of Artificial Intelligence and Autonomy (“Political Declaration”) that was re-released at about the same time.
On October 24, 2024, almost exactly a year after the AI Executive Order, the White House released its National Security Memorandum on AI. It is largely focused on what the U.S. needs to do to avoid “losing ground” and “ceding [its] technological edge” to “strategic competitors.” It also establishes processes for AI safety evaluations and internal risk management systems and advocates a “stable and responsible international AI governance landscape” to implement these procedural precautions. But it refers only indirectly to lethal autonomous weapons, through a nod to the Political Declaration.
But this Political Declaration dodges the issue of lethal autonomous weapons, committing countries only to follow international law in their use of AI weapons and to ensure that their officials “appropriately oversee,” “exercise appropriate care,” and make “appropriate context-informed judgments” about the development and deployment of AI weapons.
On October 29, 2024, the U.S. noted that almost 60 countries had signed the Political Declaration, but it is hard to see the U.S. exercising international leadership on AI while avoiding the central question of lethal autonomous weapons in this way.
As law professor Charlie Trumbull pointed out recently, the U.S. does not favor a treaty to ban specific classes of weapons or to impose a substantive obligation regarding the use of autonomous weapons, such as a requirement to maintain “meaningful human control.”
But the U.N. Secretary-General’s July announcement indicates that a legally binding instrument on AI weapons is coming. As Professor Trumbull suggests, the U.S. should engage in this process, rather than resisting or ignoring it. The National Security Memorandum was a perfect opportunity to grab the mantle of leadership on this vital international AI issue. The U.S. missed it.
The OMB AI memos are a key step for protection against AI harm, if agencies comply
Buried towards the end of the long AI Executive Order was an impactful provision that required the Office of Management and Budget (OMB) to issue guidance to federal agencies on their use and procurement of AI. The final guidance to agencies came out this past spring (M-24-10), and the procurement guidance (M-24-18) came out this October. Taken together, these have the potential to put in place safeguards that can protect against ineffective or discriminatory government use of AI.
There are a few key pieces to the released guidance. First, the guidance covers agency use of AI, including procurement, and identifies specific AI systems that require additional protective actions based on their use cases. Specifically, the guidance defines safety- and rights-impacting AI systems as those with uses that have a significant impact on people’s safety and/or rights. Critically, in addition to these definitions, the guidance also identifies use cases that are presumed to be safety-impacting or rights-impacting. This includes safety-impacting AI systems such as those used for “controlling the physical movements of robots or robotic appendages within a workplace” or “controlling the transport, safety, design, or development of hazardous chemicals or biological agents.” Systems presumed to be rights-impacting include those used for “monitoring tenants in the context of public housing,” “pre-employment screening,” and “identifying criminal suspects or predicting perpetrators’ identities.” This scoping appropriately focuses attention on use cases that could have serious negative impacts on public safety and/or rights, while also providing the specificity necessary for agency action.
Second, the guidance identifies specific minimum practices that agencies must put in place in the use (including via procured AI) of safety-impacting or rights-impacting AI. For example, both safety- and rights-impacting AI must be tested based on the real-world context in which they will be deployed, and rights-impacting AI must have demographic disparities assessed and mitigated. In addition, agencies must release plans describing in detail how they will comply with this guidance (and many have). While these steps are appropriately described as “minimum practices,” thoughtful implementation of these practices could be a large step forward in government use of AI while protecting the public.
These are key steps—but will they work? Agencies must report information about their AI use and comply with the guidance by December 1, 2024. Waivers and extensions are available, as is the potential for agencies to interpret the guidance in a way that narrows its scope. Under former President Trump’s Executive Order 13960, agencies were already required to publicly report their AI use cases. Yet the DOJ, for example, reported only four AI use cases in its 2023 inventory, none of which were facial recognition, despite a GAO report finding extensive DOJ use of facial recognition AI. Will we see more complete reporting and compliance by December 1?
More needs to be done
The 2023 Executive Order (EO) has driven essential progress toward responsible and trustworthy AI use. But, one year later, it still requires strong legislative backing from Congress to safeguard it from partisanship and to codify many of the important risk management frameworks and task forces it established across government agencies. Many leaders caution that failing to do so will limit the EO’s scope to federal agency use and procurement, leaving critical gaps. Without legislative action, the safety measures risk becoming outdated or insufficient as AI technology continues to evolve rapidly. With China at the center of concern when it comes to AI and national security, Congress must create a responsive regulatory framework that keeps pace with technological change, ensuring that innovation can advance with the appropriate guidance and guardrails. And if Congress does not act, states, which have already signaled their interest in defining what AI governance should look like, will step in, making it harder to pass national laws.
While the EO also lays the groundwork for addressing equity and civil rights in AI, it still lacks the necessary enforcement to ensure consistent protection. Questions remain about how civil rights violations will be identified, monitored, and rectified, particularly in complex and often opaque AI systems. The U.S. Department of Justice and other federal agencies are tasked with overseeing civil rights in this domain, but Congress must step in to establish clear, enforceable accountability measures. With a legislative push, safeguards can be solidified to protect all communities and build trust in the AI systems shaping society.
Going forward, a Harris administration would likely continue with policies emphasizing AI safety and responsibility, potentially expanding on existing initiatives—including those focused on civil rights. Conversely, Donald Trump has already indicated plans to revoke the Executive Order if elected, a move that aligns with his broader deregulatory stance. What has become increasingly clear to the global community is that some form of AI governance matters, and time will soon tell whether the U.S. will embody this leadership.
Acknowledgements and disclosures
The authors would like to thank Joshua Turner and Isabella Panico Hernández for editorial, research, and copyediting assistance.