The European Commission considers new regulations and enforcement for “high-risk” AI

[Image: European Union flags flutter outside the EU Commission headquarters in Brussels, Belgium, June 20, 2018. REUTERS/Yves Herman]

Last week, the European Commission (EC) released a white paper that seeks to ensure societal safeguards for “high-risk” artificial intelligence (AI). The number of large-scale, highly influential AI models is growing in both the public and private sectors, so the EC is seriously considering where new regulations, legislative adjustments, or better oversight capacity might be necessary. These models affect millions of people through critical decisions related to credit approval, insurance claims, health interventions, pre-trial release, hiring, firing, and much more. While facial recognition, autonomous weapons, and artificial general intelligence tend to dominate the conversation, the debate over regulating more commonplace applications is equally important.

The new white paper echoes the principles of the EC’s earlier AI Ethics Guidelines: non-discrimination, transparency, accountability, privacy, robustness, environmental well-being, and human oversight. It goes beyond many prior AI ethics frameworks, however, by offering specific regulatory options. Some of these options would be alterations to existing EU law, such as ensuring that product liability law can be applied to AI software and AI-driven services.

More noteworthy, however, is the proposal to consider entirely new requirements for high-risk AI applications. The high-risk categorization is limited to specific use cases within specific sectors where the stakes are particularly large. The report explicitly names transportation, healthcare, energy, and employment, as well as remote biometric identification, and other areas such as financial services could be included. Within these sectors, only especially impactful AI applications would receive the “high-risk” label and the accompanying oversight. So, while a healthcare allocation algorithm might be included, a hospital’s AI-enabled scheduling software probably would not qualify.

The report details a series of possible oversight mechanisms for applications deemed high-risk. Some would set standards for the use of AI, such as training on representative data and meeting defined levels of model accuracy and robustness. Others would require the storage of data and documentation, potentially enabling government auditing of AI models. Transparency measures are also under consideration; these might require reporting to regulatory authorities (e.g., an analysis of bias across protected classes) or directly to the consumers affected by a model (e.g., an individualized explanation of its outcome). Not all of these requirements would apply to every high-risk AI application; instead, some subset of these mechanisms would be paired with each one.
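To make the bias-reporting option more concrete, below is a minimal sketch, in Python, of the kind of group-level summary a model operator might submit to a regulator. Everything here is an illustrative assumption rather than anything specified in the EC white paper: the `bias_report` function, the group labels, the toy decisions, and the use of the “four-fifths” disparate-impact rule of thumb borrowed from U.S. employment law.

```python
from collections import defaultdict

def bias_report(records, favorable="approved"):
    """Return per-group favorable-outcome rates and the disparate-impact
    ratio (lowest group rate divided by highest group rate)."""
    counts = defaultdict(lambda: [0, 0])  # group -> [favorable count, total count]
    for group, outcome in records:
        counts[group][1] += 1
        if outcome == favorable:
            counts[group][0] += 1
    rates = {g: fav / total for g, (fav, total) in counts.items()}
    ratio = min(rates.values()) / max(rates.values())
    return rates, ratio

# Toy decisions from a hypothetical credit-approval model.
decisions = [
    ("group_a", "approved"), ("group_a", "approved"), ("group_a", "denied"),
    ("group_b", "approved"), ("group_b", "denied"), ("group_b", "denied"),
]
rates, ratio = bias_report(decisions)
print(rates)                   # per-group approval rates (0.67 vs. 0.33 here)
print(f"ratio = {ratio:.2f}")  # 0.50; the four-fifths rule of thumb flags ratios below 0.80
```

A real audit would of course go much further, examining error rates, calibration, and intersectional groups, but even a simple report like this illustrates what machine-readable transparency to a regulator could look like.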

In weighing how these mechanisms might work, it is valuable to consider how various interventions might have affected prominent instances of AI harm. For instance, would enabling audits slow the proliferation of pseudoscientific hiring software across human resources departments? Would reporting requirements help identify discriminatory patient treatment in healthcare allocation algorithms? Would more rigorous testing of Tesla’s autonomous driving have made its vehicles more resistant to the stickers that trick them into driving at dangerous speeds? These are the questions the EC paper is raising, and questions that U.S. policymakers should be asking, too. Given a type of algorithm being used for a particular high-risk purpose, what oversight mechanisms might ensure that it functions in a legal and ethical way?

While the EC paper is exploring new requirements, it also makes clear that enforcing extant law is difficult due to the complexity and opacity of AI. It takes specific expertise in programming and statistics to evaluate the fairness and robustness of AI models, which regulatory agencies across the EU may not yet have. This is very likely an issue in the United States, too. AI models can easily run afoul of many federal requirements, such as the Civil Rights Acts, the Americans with Disabilities Act, the Fair Credit Reporting Act, the Fair Housing Act, and financial modeling regulations. It is not clear that U.S. regulatory agencies are staffed to handle this emerging challenge.

The EC paper notes that investing in the capacity to enforce AI safeguards has real advantages for industry, too. The European approach holds that responsible regulation will build public trust in AI, allowing companies to build automated systems without losing the confidence of their customers. Broadly speaking, the EC’s perspective on the emergence of AI as a general-purpose technology is positive. It presents AI as a powerful tool to improve scientific research, drive economic growth, and make public services more efficient. The EC seeks to attract €20 billion (about $21.7 billion) in annual funding for AI, some of which would come from expanded EU spending. This effort would also be bolstered by an ambitious strategy to incentivize data sharing and expand access to cloud infrastructure.

The EC is poised to consider meaningful new oversight of the rapid proliferation of automated decision-making, while also expanding its investments in AI innovation and adoption. In the United States, there is already broad consensus on the value of AI; both Senator Chuck Schumer and President Trump have recently proposed massive new AI investments. There seems to be less interest, however, in establishing a robust regulatory framework. U.S. policymakers should reconsider, and join the EC in examining these policy questions: how to define high-risk AI, how to match regulatory mechanisms to specific AI applications and circumstances, and how to build government capacity for oversight.

Since the EC’s new president, Ursula von der Leyen, has committed to introducing AI legislation in her first 100 days, the white paper could kickstart serious negotiations among EU member states. This is an important conversation to watch, as it signals a substantial shift away from nebulous “AI ethics” and toward meaningful oversight of automated decision-making systems.