Sections

Commentary

How the AI Executive Order and OMB memo introduce accountability for artificial intelligence

Sorelle Friedler,
Sorelle Friedler
Sorelle Friedler Shibulal Family Associate Professor of Computer Science - Haverford College
Janet Haven, and
Janet Haven
Janet Haven Executive Director - Data & Society
Brian J. Chen
Brian J. Chen
Brian J. Chen Policy Director - Data & Society

November 16, 2023


  • A sweeping new executive order and corresponding guidance from the Office of Management and Budget (OMB) are an important step toward AI accountability.
  • Both documents prepare the federal government to be a model for accountable AI and forecast an iterative future for AI governance.
  • This is an important step, but congressional action is still required to enshrine protections and accountability in law.
President Joe Biden and Vice President Kamala Harris at an event where the president signed an Executive Order regarding Artificial Intelligence (AI) at the White House in Washington, DC. (Photo by Michael Brochstein/Sipa USA/REUTERS)
President Joe Biden and Vice President Kamala Harris at an event where the president signed an Executive Order regarding Artificial Intelligence (AI) at the White House in Washington, DC. (Photo by Michael Brochstein/Sipa USA)No Use Germany.

President Biden recently signed the Executive Order (EO) on the Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence. With sections on privacy, content verification, and immigration of tech workers (to name just a few areas), the executive order is sweeping. Encouragingly, it introduces key guardrails for the use of AI and takes important steps to protect peoples’ rights. It is also inherently limited: Unlike acts of Congress, executive actions cannot create new agencies or grant new regulatory powers over private companies. (They can also be undone by the next president.) The EO was followed two days later by a draft memorandum, now open for public comment, from the Office of Management and Budget (OMB) with additional guidance for the federal government to manage risks and mandate accountability while advancing innovation in AI. Taken together, these two government directives offer one of the most detailed pictures of how governments should establish rules and guidance around AI.

Notably, these actions towards accountability focus on current harms and not existential risk, and thus can serve as useful guides to policymakers focused on the everyday concerns of their constituents. Beyond executive action, with its inherent limits, the next step will be for other policymakers—from Congress to the states—to use these documents as a guide for future action in requiring accountability in the use of AI.

As we analyze the EO and the OMB memo alongside each other for accountability directions, here is what stands out:

  • Both documents mandate hard accountability. Rather than voluntary standards and company commitments, the EO directs federal agencies to enforce civil rights protections to protect against algorithmic discrimination. It also requires companies developing next-generation AI models to report to the federal government, on an ongoing basis to ensure that they meet certain safety, evaluation, and reporting procedures. For example, in the draft OMB memo points to a minimum bar of safety and rights protections that agencies themselves must comply with in order to use the technology. It provides clear definitions for safety- and rights-impacting AI and includes lists of specific systems that should be presumed to be safety or rights-impacting. The emphasis is on creating guardrails based on impact to protect against the potential harms of systems such as hiring algorithms, criminal risk assessments, and medical AI devices.
  • The EO and draft OMB memo set the federal government up to be a model for accountable AI. The government’s regulation of its own use of AI through these two documents, meant to be read together, is significant. In the absence of legislation, the government is using its power to shape the market and model a path for private industry and potential future legislation.
  • AI governance is iterative; more is yet to come. The EO and the OMB memo build substantively on earlier Biden administration efforts like the Blueprint for an AI Bill of Rights, released by OSTP in October 2022, and NIST’s January 2023 AI Risk Management Framework. This EO directs federal agencies to develop additional guidance—for instance, on accountability for generative AI and specific guidance for procurement officers. We’ll be seeing that guidance come out over the next year from a number of agencies.
  • Congress needs to act in order to enshrine rights and other protections in law. The Executive Branch has moved the debate forward substantially. But without protections and hard accountability measures enshrined in law, the governance of AI remains subject to the priorities of future administrations, and huge gaps remain in regulating companies. These documents provide useful guidance for the legislation still needed, including key definitions and procedures for safety- and rights-impacting systems given in the OMB memo.
Impact on government use of AI

The executive order (in Section 10.1(b)) gives explicit guidance to federal agencies for using AI in ways that protect safety and rights. The section outlines contents of the draft OMB memo released for public comment two days after the EO. In what may become a model for AI governance from localities, to states, to international governing agreements, the OMB memo, Advancing Governance, Innovation, and Risk Management for Agency Use of Artificial Intelligence, requires specific AI guardrails.

Critically, the memo includes definitions of safety- and rights-impacting AI as well as lists of systems presumed to be safety- and-rights impacting. This approach builds on work done over the past decade to document the harms of algorithmic systems in mediating critical services and impacting people’s vital opportunities. By taking this presumptive approach, rather than requiring agencies start from scratch with risk assessments on every system, the OMB memo also reduces the administrative burden on agencies and allows decision-makers to move directly to instituting appropriate guardrails and accountability practices. Systems can also be added or removed from the list based on a conducted risk assessment.

Once an AI system is identified as safety- or rights-impacting, the draft OMB memo specifies a minimum set of practices that must be in place before and during its use. As required by the executive order, these practices build on those identified in the Blueprint for an AI Bill of Rights. This detailed section of the memo leads off with “impact assessments” and lists three key areas that agencies must assess before a system is put into use: intended purpose and expected benefit; potential risks to a broad range of stakeholder groups; and quality and appropriateness of the data the AI model is built from. Should the assessing agency conclude that the system’s benefits do not meaningfully outweigh the risks, “agencies should not use the AI.” The memo also directs agencies to assess, through this process, whether the AI system is fit for the task at hand; this is a critical effort to make sure AI actually works, when many times it has been shown not to, and to assess whether AI is the right solution to the given problem, countering the tendency to assume it is.

The OMB memo goes on to require a range of accountability processes, including human fallback, the mitigation of new or emerging risks to rights and safety, ongoing assessment throughout a system’s lifecycle, assessment for bias, and consultation and feedback from affected groups. Taken together, if carried through to the final version of the memo, these requirements create a remarkable step forward in establishing an accountability ecosystem—not one point of intervention, but many methodologies and practices that, working together over time and at multiple stages in an AI lifecycle, could represent meaningful controls.

Importantly, the OMB memo requires agencies to stop using an AI system if these practices are not in place. The minimum practices additionally include instructions to reconsider use of a system if concerning outcomes, such as discrimination, are found through testing.

Public accountability will be challenging, given the breadth and complexity of these practices. One key accountability mechanism used will be annual reporting, as part of an expanded AI use case inventory. However, the details of what will be reported were not included as part of the memorandum and will be determined later by OMB. Journalists and researchers have identified problems with the previous practices of the AI use case inventory, including both that agencies left known AI uses off their inventory and that the reporting requirements were minimal and did not include testing and bias assessment results. Looking forward, effectiveness of the AI use case inventory as an accountability mechanism will depend on whether existing loopholes and under-reporting concerns are addressed through the OMB process to come.  It’s also important to consider that the effectiveness of transparency reporting on AI systems as an accountability mechanism has also been more broadly challenged.

Throughout the guidance, OMB refers to requirements for government “use of AI.” This phrase, importantly, covers both AI that is developed and then used by the federal government, and AI that is procured by the government. By using the power of the government’s purse, the guidance also has the potential to influence the private sector as well. OMB also commits to developing further guidance for AI contracts that aligns with what it has laid out so far in this draft memo. That current guidance is rigorous; if those same provisions are successfully required for government purchasing of AI, it will significantly shape how government AI vendors are building and testing their products.

Impact on the private sector

The president only has so many levers to pull through an executive order to regulate private industry. Because the EO cannot make new laws, it relies on existing agency and presidential authorities (and the development of procurement rules described above) to influence how private companies are developing and deploying AI systems. Within that scope, the regulatory impact of the EO on the private sector could still be far-reaching.

The EO directs agencies with enforcement powers to deepen their understanding of their capacities in the context of AI, to coordinate, and to develop guidance and potentially additional regulations to protect civil rights and civil liberties in the broader marketplace—as well as to protect consumers from fraud, discrimination, and other risks, including risks to financial stability, and specifically to protect privacy. Sections 7 through 9 address various aspects of this, starting by directing the attorney general to assemble the heads of federal civil rights offices, including those of enforcement agencies, to determine how to apply and potentially expand the reach of civil rights law across the government to address existing harms.

Additionally, the President calls on Congress to pass federal data privacy protections, and then through the EO’s Section 9 directs agencies to do what they can to protect people’s data privacy without Congressional action. The section opener calls out not only “AI’s facilitation of the collection or use of information about individuals,” but also specifically “the making of inferences about individuals.” This could open up a broader approach to assessing privacy violations, along the lines of “networked privacy” and associated harms, which considers not only individual personal identifiable information but the inferences that can be drawn by looking at connected data about an individual, or relationships between individuals.

The EO directs agencies to revisit the guidelines for privacy impact assessments in the context of AI, as well as to assess and potentially issue guidelines on the use of privacy-enhancing technologies (PETs), such as differential privacy. Though brief, the EO’s privacy section pushes to expand the understanding of data privacy and the remedies that might be taken to address novel and emerging harms. As those ideas move through government, they will inevitably inform potential data protection and privacy laws at the federal and (more likely) state level that will govern private industry.

Generative AI

It’s not surprising that generative AI was given a prominent treatment in the executive order: systems like ChatGPT that can generate text in response to prompts and other systems that can generate images, video, or audio, have catapulted concerns about AI into the public consciousness. Concerns have ranged from the technology’s potential to replace skilled writers to its reinforcement of degrading stereotypes to the overblown notion that it will end humanity as we know it. Yet these systems are largely created by the private sector, and without new legislation the White House has limited levers to require these companies to act responsibly. There is an unfolding, live debate about whether to treat generative AI systems differently than other AI systems. The EO’s authors choose to differentiate generative AI in Section 4, and have drawn criticism for that decision; a better approach may have been the one taken in the OMB memo where the same protections are required for generative AI as other AI and the focus is on the potential harms of the system.

To govern generative AI systems, the executive order invokes the Defense Production Act. Introduced during the Korean War and also used for production of masks and ventilators during the COVID pandemic, the Defense Production Act gives the president the authority to expedite and expand industrial production in order to promote national defense. The executive order (in Section 4.2(i)) uses it to require private companies to preemptively test their models for specific safety concerns; it also specifies “red-teaming” as the testing methodology. Red-teaming is a practice of having a team external to the development of a system (but potentially still within the company) stress-test the system for specific concerns. The executive order requires that companies perform red-teaming in line with guidance from NIST that will be developed per Section 4.1(ii). Companies must report the resulting documentation of safety testing practices and results to the federal government.

This AI accountability model—preemptive testing according to specific standards and associated reporting requirements—is potentially useful. Unfortunately, the specifics in this case leave much to be desired. First, given the use of the Defense Production Act, the testing and reporting the EO requires are limited to concerns relating to national defense and the protection of critical infrastructure, including cybersecurity and bioweapons. Yet as public debate has shown, concerns about generative AI go well beyond these limited settings. Second, the specific definitions used in the executive order to determine which systems must adhere to these standards appear to have been copied wholesale from a policy document put forth by OpenAI and other authors. Its thresholds for model size have little substantive justification; this means that future technological developments may render them under-inclusive or otherwise ineffective in targeting the systems with the most potential for harm. Finally, the executive order positions AI red-teaming as the singular AI accountability mechanism to be used for generative AI, when AI red-teaming works best in combination with other accountability mechanisms. By contrast, the OMB guidance for AI use by the federal government, which will also be required for generative AI, requires multiple accountability mechanisms including algorithmic impact assessments and public consultation. The full landscape of AI accountability mechanisms should be applied to generative AI by private companies as well.

Protecting workers

Consistent with the EO’s broad approach, the order addresses AI’s worker impacts in multiple ways. First, while research suggests a more complicated picture on technological automation and work, the EO sets out to support workers during an AI transition. To that end, the EO directs the chairman of the president’s Council of Economic Advisers to “prepare and submit a report to the president on the labor-market effects of AI.” Section 6(a)(ii) mandates that the secretary of labor submit to the president a report analyzing how federal agencies may “support workers displaced by the adoption of AI and other technological advancements.”

Alongside the focus on AI displacement, the EO recognizes that automated decision systems are already in use in the workplace and directs attention to their ongoing impacts on job quality, worker power, and worker health and safety. The most encompassing directive lies in Section 6(b), which directs the secretary of labor, working with other agencies and “outside entities, including labor unions and workers,” to develop “principles and best practices” to mitigate harms to employees’ well-being. The best practices must cover “labor standards and job quality,” and the EO further “encourages” federal agencies to adopt the guidelines in their internal programs.

Section 7.3 of the EO directs the labor department to publish guidance for federal contractors regarding “nondiscrimination in hiring involving AI and other technology-based hiring systems.” Given the overwhelming evidence that algorithmic systems replicate and reinforce human biases, the broad language of “other technology-based hiring systems” is a major opportunity for the DOL to model standards of nondiscriminatory hiring.

While the EO’s worker protections are only guidance and best practices, the OMB memo directly mandates protocols to support workers and their rights when agencies use AI. The memo applies the “minimum risk management practices” where AI is used to determine “the terms and conditions of employment.”  This broad definition positions the federal government, as the nation’s largest employer, to influence the use of AI systems within the workplace. The memo also requires that human remedies are in place in some cases, a requirement that may add jobs, adding complexity to concerns about the labor-market effects of AI. Further, the OMB memo’s requirement that federal agencies “consult and incorporate feedback from affected groups” positions workers and unions to influence the deployment of AI technology, which aligns with calls from civil society and academia to ensure that the people most likely to be affected by technology should have influence into that system’s design and deployment.

How will this all get done?

The narrative that the federal government is not knowledgeable about AI systems should be laid to rest by these recent documents. There was clearly a lot of thought put into the design and implementation of a national AI governance model. That said, it’s also clear that many more people representing the right mix of expertise will be needed quickly to implement this ambitious plan on the tight timeline laid out in the order—and on the implicit deadline marked by the end of the Biden administration’s first term. Given that the EO and the OMB memo collectively run to well over 100 pages of actions that the federal government should take to address AI, the question looms: who will do all this work?

A major new role addressed in both the EO and the OMB memo is that of the Chief AI Officer (CAIO), which every agency head is required to designate within 60 days of the EO’s enactment. The CAIO’s responsibilities are laid out in the OMB memo and fall into three categories: coordinating agency use of AI, promoting AI innovation, and managing risks from AI use. The way the CAIO role is understood and filled will be critical to what comes next; if agencies interpret the role as solely or primarily a technical one, rather than one focused societally on opportunities and risks related to the public interest use of AI, they may pursue very different implementation priorities than those articulated by the EO. CAIOs are also responsible for agency-level AI strategies, which are due within one year of the EO’s launch. The strategies seem likely to call for increased headcount and new expertise in government.

The EO has anticipated the need for both bringing new talent into the government and building the skills and capacities of civil servants on AI matters. The federal government has long been criticized for its slow, difficult hiring processes, making it tremendously challenging for an administration to pivot attention to an emerging issue. This administration has tried to preempt this criticism through the announcement of “AI talent surge” specified in Section 10.2 of the EO. That section gives OSTP and OMB a spare 45 days to figure out how to get the needed people into government, including through the establishment of a cross-agency AI and Technology Talent Task Force. The federal government has already started some of that recruitment push in the launch of a new AI jobs website.

What is potentially most challenging in recruiting “AI talent” is identifying the actual skills, capacities, and expertise needed to implement the EO’s many angles. While there is a need, of course, for technological talent, much of what the EO calls for, particularly in the area of protecting rights and ensuring safety, requires interdisciplinary expertise. What the EO requires is the creation of new knowledge about how to govern—indeed, what the role of government is in an increasingly data-centric and AI-mediated environment. These are questions for teams with a sociotechnical lens, requiring expertise in a range of disciplines, including legal scholarship, the social and behavioral sciences, computer and data science, and often, specific field knowledge—health and human services, the criminal legal system, financial markets and consumer financial protection, and so on. Such skills will especially be key for the second pillar of the administration’s talent surge—the growth in regulatory and enforcement capacity needed to keep watch over the powerful AI companies. It’s also critical to ensure that these teams are built with attention to equity at the center. Given the broad empirical base that demonstrates the disproportionate harms of AI systems to historically marginalized groups, and the President’s declared commitment to advancing racial equity across the federal government, equity in both hiring and as a focus of implementation must be a top priority of all aspects of EO implementation.

What the EO doesn’t do

As broad as the EO is, there are critical areas of concern that have either been pushed off to later consideration, or avoided. For instance, the EO includes a national security carveout, with direction to develop separate guidance in 270 days to “address the governance of AI used as a component of a national security system or for military and intelligence purposes”; many applications of AI could potentially fall within those criteria. The EO also doesn’t take the opportunity to ban specific practices shown to be harmful or ineffective; an example where it could have taken further action is in banning the use of affective computing in law enforcement. The EO addresses the potential for AI to be valuable in climate science and the mitigation of climate change; however, it does nothing about AI’s own environmental impact, missing an opportunity to force reporting on energy and water usage by companies creating some of the biggest AI systems. Lastly, the EO sets guidelines for the use of AI by federal agencies and contractors but does not attach any requirements or guidance for recipients of federal grants, such as cities and states.

Finally, the EO addresses research in a number of points throughout the document and references research on a range of topics and through many vehicles, including an National Science Foundation (NSF) Regional Innovation Engine and four NSF AI Research Institutes, to join the 25 already established. Yet the EO doesn’t include major *new* commitments to research funding. A more robust approach to addressing AI research and education in the EO could have been a statement that reframed the national AI research and development field as sociotechnical, rather than purely technical—proactively focused on interdisciplinary approaches that center societal impacts of AI alongside technological advancement. Such a statement would have aligned meaningfully with Vice President Kamala Harris’s November 1st 2023 speech at the UK AI Safety Summit in which she argued for “a future where AI is used to advance the public interest.”

Conclusion

If the administration is indeed committed to seeing AI in “the public interest,” as Vice President Harris indicated, its new EO and OMB guidance are the clearest indication of how it intends to meet that ambition: mandating hard accountability to protect rights, regulating private industry, and moving iteratively, so that governance efforts advance alongside the field of sociotechnical research. But the executive branch can only do so much. Ultimately, the EO can be read–among other ways—as a roadmap for Congress to legislate. Additionally, cities, states, and other countries should understand these new documents as direction-setting and could choose to rapidly align their policies with these documents to create more comprehensive rights and safety protections.

Authors