New White House AI principles reach beyond economic and security considerations


On January 7, the White House Office of Management and Budget released a list of ten principles that federal agencies should follow when developing rules for artificial intelligence. This memo fulfills part of an executive order on artificial intelligence signed by President Trump in February 2019. While the guidance memo echoes the executive order's call for light-touch regulation, its principles go beyond simply promoting the innovation and adoption of emerging technologies. By calling for increased public engagement, non-discrimination, and transparency, the memo recognizes AI's impacts on people, not just on the economy and national security. While the guidance memo could inform the national framework for AI policy in the U.S., it does not itself carry the force of agency rulemaking or legislation from Congress. However, as the nation competes globally to adopt AI, these principles provide criteria for advancing adjacent policy goals.

Public trust and participation

The first two principles on the list deal with gaining public trust and soliciting public participation. The memo acknowledges that "AI applications could pose risks to privacy, individual rights, autonomy, and civil liberties" that could hinder the widespread adoption of the technology. If the public believes the risks of artificial intelligence outweigh the benefits, they are less likely to use these applications themselves. For applications like facial recognition, a lack of trust can lead to jurisdictions banning their use. The memo suggests that one way to address these risks is by encouraging the exchange of information between agencies and the public about the potential risks and benefits of new technology. To begin this exchange, the Office of Management and Budget has opened a 60-day public comment period to receive feedback on the memo itself.

Fairness and non-discrimination

The memo also addresses the potential for AI to either reduce or amplify historical discrimination. AI algorithms can be designed to recognize and correct for human biases, but if these biases are ignored, AI can make decisions that negatively impact marginalized groups. In applications such as hiring, algorithms trained on historical data may discriminate based on race, gender, or another legally protected status, especially when they do not account for systemic inequalities in employment. Disparate impacts can occur even when information on a protected status is not collected, such as when a zip code acts as a proxy for race. The White House memo directs federal agencies to consider whether their regulations and other actions reduce or increase unlawful discrimination by applications of AI.

Disclosure and transparency

To foster public trust, the memo recommends disclosure of when AI applications are in use and transparency about their potential impacts. It further notes that “what constitutes appropriate disclosure and transparency is context-specific.” In decisions with potentially major consequences, such as criminal justice sentencing, hiring, or access to credit, disclosure and transparency of how and when AI is used could help counteract historical discrimination against marginalized groups. More generally, informing the public about their interactions with AI encourages their participation in the regulatory process. Greater knowledge of when their data is collected and analyzed by artificial intelligence could lead to public conversations about what AI applications and outcomes are appropriate.

Although these principles are only meant to guide AI-related actions of federal agencies, they also provide a framework that future rulemaking or legislation can build upon. Until now, much of the policy discussion surrounding artificial intelligence in the executive branch has centered on its economic impacts on jobs and productivity, or the national security implications of autonomous weapons. While these are important applications of artificial intelligence, its impact is not limited to these policy areas alone. A comprehensive policy approach would acknowledge the ubiquity of AI, promote its benefits, and manage its risks regardless of the context in which it's used. A set of principles that includes public trust, fairness, and transparency instructs federal agencies to consider how AI might affect Americans in their daily lives.