How the White House Executive Order on AI ensures an effective governance regime

President Joe Biden speaks during an event on Artificial Intelligence in the East Room at the White House on October 30, 2023, in Washington, DC. President Biden signed an executive order that put checks on the safety and development of artificial intelligence in the United States. Source: Samuel Corum/Sipa USA/REUTERS
Editor's note:

The following testimony was given before the House Committee on Oversight & Accountability at the “White House Overreach on AI” hearing on March 21, 2024. Watch the full testimony.

Chairwoman Mace, Ranking Member Connolly, and distinguished members of the Subcommittee on Cybersecurity, Information Technology, and Government Innovation: thank you for the invitation to testify on President Biden’s Executive Order on the Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence. I am Nicol Turner Lee, a senior fellow in Governance Studies and director of the Center for Technology Innovation at the Brookings Institution. With a history of over 100 years, Brookings is committed to evidence-based, nonpartisan research in a range of focus areas. My research expertise encompasses data collection and analysis around regulatory and legislative policies that govern telecommunications and high-tech industries, along with the impacts of digital exclusion, artificial intelligence, and machine learning algorithms on vulnerable populations. My forthcoming book, “Digitally Invisible: How the Internet is Creating the New Underclass”, will be published by Brookings Press later this summer.

To understand the White House Executive Order (EO), its objectives, and its impacts, it is important to delve into the governmental context in which it was developed and released. To this end, in addition to summarizing the EO, I also briefly summarize a few crucial government actions preceding and surrounding it: the Blueprint for an AI Bill of Rights, released in October 2022; the NIST AI Risk Management Framework 1.0, released in January 2023; the voluntary commitments the White House secured from top AI developers in July 2023; and the Office of Management and Budget (OMB) guidance memo, released shortly after the EO in November 2023. Our conversation today must reflect this ‘whole of government’ approach toward achieving national guidance as AI becomes both an asset and a concern for our national security interests. I also argue in my testimony that Congress must act quickly on many of the AI proposals and activities under discussion to ensure that we maintain our status as leaders in the global economy.

The foundational tenets of the White House EO

The National Blueprint for an AI Bill of Rights

In October 2022, the White House Office of Science and Technology Policy published the Blueprint for an AI Bill of Rights, which set out a non-binding roadmap for the responsible use of artificial intelligence. The comprehensive document, or the Blueprint, identified five core principles to guide and govern the effective development and implementation of AI systems: Safe and Effective Systems, Algorithmic Discrimination Protections, Data Privacy, Notice and Explanation, and Human Alternatives, Consideration, and Fallback. Suggestions within this framework include pre-deployment risk and discrimination assessments, required consent with respect to the “collection, use, access, transfer, and deletion” of user data, issuance of plain-language notice and explanation of automated decision-making, and access to human review of automated decisions in some cases. The intent of the Blueprint was to outline the rights of consumers in ways that provide them some agency over the autonomous tools and decisions being made on their behalf.

Following the release of the Blueprint, at least five federal agencies have adopted guidelines for their own responsible use of automated systems, a few have established centers or offices to implement those guidelines, and at least a dozen agencies, including the Federal Trade Commission (FTC) and the Food and Drug Administration (FDA), have issued some form of binding guidance for the use of automated systems in the industries under their jurisdiction. However, federal agencies’ adherence to these activities still varies in detail, scope, timelines, and deliverables. According to the recent update on both voluntary and executive-branch government activities, these principles have helped frame the focus on risk management of AI models, which has been a common concern among industry actors as well.

The National Institute of Standards & Technology (NIST)

In January 2023, the National Institute of Standards and Technology (NIST) issued Version 1.0 of its Artificial Intelligence Risk Management Framework (AI RMF), a multi-tool for organizations to design and manage trustworthy and responsible artificial intelligence (AI) that is meant to be “voluntary, rights-preserving, non-sector-specific, [and] use-case agnostic.” The AI RMF provides two lenses through which to consider questions around balancing risks and benefits. First, it provides a conceptual roadmap for identifying risk in the AI context, outlining general types and sources of risk relating to AI and enumerating seven key characteristics of trustworthy AI: safe; secure and resilient; explainable and interpretable; privacy-enhanced; fair, with harmful bias managed; accountable and transparent; and valid and reliable. Second, it offers a set of organizational processes and activities to assess and manage risk, linking AI’s socio-technical dimensions to stages in the lifecycle of an AI system and to the actors involved. Key steps for these processes and activities are “test, evaluation, verification, and validation (TEVV).” The processes and activities are broken down into four core functions: govern, map, measure, and manage.

NIST is also launching a companion “playbook,” which will provide additional suggestions for actions, references, and documentation for the “govern, map, measure, and manage” functions of the AI RMF. As the title “Version 1.0” implies, the document released on January 26, 2023, is not meant to be NIST’s last word on AI risk management. The agency expects to conduct a full, formal review by 2028, which could produce a Version 2.0. In recent months, NIST has gone further with the launch of the AI Safety Institute Consortium (AISIC), which will bring together stakeholders across industry, academia, and government to jointly develop and diffuse standards, best practices, benchmarks, and more. The Consortium supports the broader initiatives of the AI Safety Institute, also housed at NIST.

Voluntary commitments from the private sector

In July 2023, the White House secured voluntary commitments from seven leading U.S. AI companies (Amazon, Anthropic, Google, Inflection, Meta, Microsoft, and OpenAI) to ensure safety, security, and trust with advanced AI systems. These include agreements about internal and external security testing on crucial risks such as bio- and cybersecurity as well as broader societal effects, the protection of unreleased model weights, and public reporting of system capabilities, limitations, and guidelines for responsible use. In September 2023, eight additional companies, including IBM, Nvidia, and Palantir, were convened at the White House to agree to these same terms. Such proactive participation by companies suggests that the federal government is not necessarily acting alone on the issue of responsible AI governance and is equally interested in ways to balance the needs of the market with the further design and deployment of autonomous tools. One of the commitments secured by the White House was to develop “robust technical mechanisms to ensure that users know when content is AI generated, such as a watermarking system.” Industry leaders have continued to focus on digital watermarking; in recent months, Google, Adobe, Intel, and Microsoft have joined a coalition dedicated to developing watermarking technology. It is important to note, however, that efforts to establish digital provenance face challenges and that watermarking is not a foolproof strategy.

Continue reading the full testimony

  • Acknowledgements and disclosures

    Google, Meta, and Microsoft are general, unrestricted donors to the Brookings Institution. The findings, interpretations, and conclusions posted in this piece are solely those of the author and are not influenced by any donation.

  • Footnotes
    1. White House Office of Science and Technology Policy, “Blueprint for an AI Bill of Rights,” October 4, 2022, https://www.whitehouse.gov/ostp/ai-bill-of-rights/.
    2. NIST, “Artificial Intelligence Risk Management Framework (AI RMF 1.0),” January 2023, https://doi.org/10.6028/NIST.AI.100-1.
    3. Ibid., p. 12.
    4. Ibid., p. 20.
    5. The White House, “FACT SHEET: Biden-Harris Administration Secures Voluntary Commitments from Leading Artificial Intelligence Companies to Manage the Risks Posed by AI,” July 21, 2023, https://www.whitehouse.gov/briefing-room/statements-releases/2023/07/21/fact-sheet-biden-harris-administration-secures-voluntary-commitments-from-leading-artificial-intelligence-companies-to-manage-the-risks-posed-by-ai/.
    6. Makena Kelly, “Watermarks aren’t the silver bullet for AI misinformation,” The Verge, October 31, 2023, https://www.theverge.com/2023/10/31/23940626/artificial-intelligence-ai-digital-watermarks-biden-executive-order.