New White House guidance downplays important AI harms

Following a February 2019 executive order, the U.S. Office of Management and Budget (OMB) issued its final guidance on the regulation of artificial intelligence (AI) on November 17, 2020. This document presents the U.S. government’s strategy toward AI oversight, and as such it deserves careful scrutiny. The White House guidance is reasoned and reflects a nuanced understanding of AI; however, it also raises real concerns about the long-term direction of AI regulation.

To start, there are many positive aspects of the White House AI regulatory guidance. Its scope is reasonable: it recognizes that regulating private-sector AI is fundamentally different from governing the government’s own deployment of AI systems, and it leaves the latter to a separate document. It also argues, appropriately, for regulation that is specific to the sector and AI application type, rather than sweeping policies that cannot make sense across the broad spectrum of AI uses. The OMB document takes a risk-based approach, suggesting that stronger protections be prioritized for AI systems with the potential for higher risk. The document also calls for federal agencies to work with standards bodies, specifically directing them to adhere to the National Institute of Standards and Technology’s federal engagement plan for developing AI technical standards.

Other aspects of the guidance are less positive. The document leans heavily on the promise of AI development and innovation, especially for economic growth, stating that “promoting innovation and growth of AI is a high priority of the U.S. government.” This is understandable: the digital economy accounts for over 9% of GDP and was growing at 6.8% per year before the COVID-19 pandemic, and AI is a large part of that sector and is expanding into many others. Yet while the economic value of AI is important, the guidance is overly focused on arguing that regulation should not hamper AI innovation and deployment. The document lists a series of “non-regulatory approaches to AI” and includes a section on “reducing barriers to the deployment and use of AI,” but these are not balanced by any comparable contextualization of AI harms.

The OMB guidance prominently states “that many AI applications do not necessarily raise novel issues.” This claim is partially true: the government need not be concerned with many AI applications. Yet the reverse also holds: many private-sector AI applications absolutely do raise novel issues. AI systems can systematize discrimination and shift power away from consumers and front-line employees. They can enable large-scale corporate surveillance. And they can do all of this while making processes harder for individuals to understand, potentially undermining their legal recourse for harms.

These issues are not entirely ignored in the White House document. It lists ten principles for the stewardship of AI applications that are consistent with the recommendations of leading ethical AI experts, including “Public Trust in AI,” “Public Participation,” “Risk Assessment and Management,” “Fairness and Non-Discrimination,” and “Disclosure and Transparency.” The problem is that these principles are framed as a checklist that must be worked through before agencies can implement any new rules on AI. Directly before the list of principles, the guidance states that “Agencies should consider new regulation only … in light of the foregoing section … that Federal Regulation is necessary.” Paired with the document’s broader anti-regulatory framing, this suggests an intent to preempt regulatory action.

Preemption is problematic because we already know there are areas that require tougher enforcement and regulation. The Food and Drug Administration is considering how to adapt its rules to ensure the safety of AI-enhanced medical devices while still allowing them to be updated with new data. The Department of Labor and the Equal Employment Opportunity Commission will have to examine how algorithmic tools affect worker compensation, workplace safety, and hiring. The Department of Health and Human Services must learn to enforce legal non-discrimination protections for algorithmically allocated health services. Similarly, the Department of Transportation needs new rules to ensure the safety of autonomous vehicles.

In the absence of updated regulations and enforcement processes, the status quo makes it easy to skirt the law by using algorithms. Overseeing algorithms requires new ideas, technical expertise, and additional capacity, and the White House and OMB should be encouraging agencies to tackle these new risks. Unfortunately, in a finalized rule on disparate impact standards, the Department of Housing and Urban Development ignored the new challenges of AI, making it nearly impossible for plaintiffs to prove they were discriminated against by algorithms.

Beyond its direct influence on agency rulemaking, this guidance may also affect the future role of OMB, especially its regulatory review arm, the Office of Information and Regulatory Affairs (OIRA). At times in its forty-year history, OIRA has played an active regulatory “gatekeeper” role. That role centers on “significant regulatory actions,” which include rules estimated to have an economic impact of over $100 million as well as regulations that “raise novel legal or policy issues.” While some AI regulations will likely meet the economic impact threshold, many more will surely raise novel issues in law and policy. The relatively small OIRA staff, split into groups assigned to different federal agencies, may not have the AI expertise necessary to weigh in effectively on these emerging rules. It is uncertain exactly how this will develop, but it is possible that the White House guidance, as interpreted by OIRA, will create new compliance burdens.

It may not take long to find out what impact this guidance will have. Perhaps its most valuable contribution is the requirement that federal agencies provide compliance plans within six months (by May 17, 2021). With the influx of new appointees from the Biden administration, it is possible that this document drives new ideas and an unprecedented degree of action to build sensible and effective AI regulations. Hopefully, OIRA will work primarily to foster knowledge exchange and cooperation between agencies, as Cass Sunstein has argued.

This may be the case. Still, there is cause for concern in the document’s framing, and its effect will depend significantly on how future agency staff interpret and apply it. It is hard to imagine that revising this guidance will be a leading priority of the Biden White House, given its many other pressing problems. Yet there is a real risk that this document becomes a force for maintaining the status quo rather than for addressing serious AI harms.