Key enforcement issues of the AI Act should lead EU trilogue debate

Image: A general view of a voting session at the European Parliament. The European Parliament plans to regulate the use of artificial intelligence (AI) and says it should be subject to stricter rules.

On June 14, the European Parliament passed its version of the Artificial Intelligence (AI) Act, setting the stage for a final debate on the bill between the European Commission, Council, and Parliament, called the “trilogue.” The trilogue will follow an expedited timeline: the European Commission is pushing to finish the AI Act by the end of 2023 so that it can be voted through before the 2024 European Parliament elections introduce political uncertainty. The trilogue will certainly take up many contentious issues, including the definition of AI, the list of high-risk AI categories, and whether to ban remote biometric identification. Relatively underdiscussed, however, are the details of implementation and enforcement of the EU AI Act, which differ meaningfully across the proposals from the Council, Commission, and Parliament.

The Parliament proposal would centralize AI oversight in one agency per member state while expanding the role of a coordinating AI Office, a key change from the Commission and Council versions. All three proposals look to engender an AI auditing ecosystem, but none has committed to this mechanism strongly enough to make its success certain. Further, the undetermined role of civil liability looms on the horizon. These issues warrant both focus and debate, because no matter which specific AI systems are regulated or banned, the success of the EU AI Act will depend on a well-conceived enforcement structure.

One national surveillance authority, or many?

The Parliament’s AI Act contains a significant shift in the approach to market surveillance, that is, the process by which the European Union (EU) and its member states would monitor and enforce the law. Specifically, Parliament requires one national surveillance authority (NSA) in each member state. This is a departure from the Council and Commission versions of the AI Act, which would enable member states to create as many market surveillance authorities (MSA) as they prefer.

In all three AI Act proposals, there are several areas where existing agencies would be anointed as MSAs, including for AI in financial services, AI in consumer products, and AI in law enforcement. The Council and Commission proposals would let member states extend this approach: a member state could, for example, make its existing agency in charge of hiring and workplace issues the MSA for high-risk AI in those areas, or name its education ministry the MSA for AI in education. The Parliament proposal does not allow for this—aside from a few selected MSAs (e.g., finance and law enforcement), member states must create a single NSA to enforce the AI Act. In the Parliament version, the NSA even gets some authority over consumer product regulators and can override those regulators on issues specific to the AI Act.

Between these two approaches, there are a few important trade-offs to consider. A single NSA, as the Parliament proposes, is more likely to be able to hire talent, build internal expertise, and enforce the AI Act effectively than a wide range of distributed MSAs. Further, centralization into one NSA per member state makes coordination between member states easier—there is generally just one agency per member state to work with, and each has a voting seat on the board that manages the AI Office, a proposed advisory and coordination body. This is clearly simpler than creating a range of coordination councils between many sector-specific MSAs.

However, this centralization comes at a cost: the NSA will be separated from existing regulators in member states. This leads to the unenviable position that algorithms used for hiring, workplace management, and education will be governed by different authorities than human actions in exactly the same areas. The interpretation and implementation of the AI Act are also likely to suffer in some areas, since AI experts and subject matter experts will sit in separate agencies. Early examples of application-specific AI regulation demonstrate how complex it can be (see, for instance, a proposed U.S. rule on transparency and certification of algorithms in health IT systems, or the Equal Employment Opportunity Commission’s guidance on AI hiring under the Americans with Disabilities Act).

This is a difficult decision with unavoidable trade-offs, but because the approach to government oversight affects every other aspect of the AI Act, it should be prioritized, not postponed, in trilogue discussions.

Will the AI Act engender an AI evaluation ecosystem?

Government market surveillance is only the first of two or three (the Parliament version adds individual redress) mechanisms for enforcing the AI Act. The second mechanism is a set of processes to approve organizations that would review and certify high-risk AI systems. These organizations are called ‘notified bodies’ when they receive a notification of approval from a government agency selected for this task, which itself is called a ‘notifying authority.’ This terminology can be quite confusing, but the general idea is that EU member states will approve organizations, including non-profits and companies, to act as independent reviewers of high-risk AI systems, giving them the power to approve those systems as meeting AI Act requirements.

It is the aspiration of the AI Act that this will foster a European ecosystem of independent AI assessment, resulting in more transparent, effective, fair, and risk-managed high-risk AI applications. Certain organizations already exist in this space, such as the algorithmic auditing company Eticas AI, AI services and compliance provider AppliedAI, the digital legal consultancy AWO, and the non-profit Algorithmic Audit. This is a goal that other governments, such as the UK and U.S., have encouraged through voluntary policies.

However, it is not clear that the current AI Act proposals will do much to support such an ecosystem. For most types of high-risk AI, independent review is not the only path for providers to sell or deploy high-risk AI systems. Alternatively, providers can develop AI systems to meet a forthcoming set of standards, which will describe the AI Act’s rules in more detail, and simply self-attest that they have done so, subject to some reporting and registration requirements.

The independent review is intended to be based on required documentation of the technical performance of the high-risk AI system, as well as documentation of its management systems. This means the review can only really start once this documentation is complete, which is the same point at which an AI developer could otherwise self-attest to meeting the AI Act requirements. Self-attestation is therefore sure to be faster and more certain (an independent assessment could come back negative) than paying for an independent review of the AI system.

When will companies choose independent review by a notified body? A few types of biometric AI systems, such as biometric identification (specifically of more than one person, but short of mass public surveillance) and biometric analysis of personality characteristics (not including sensitive characteristics such as gender, race, citizenship, and others, for which biometric AI is banned), are specifically encouraged to undergo independent review by a notified body. However, even this is not required. Similarly, the new rules proposed by Parliament on foundation models require extensive testing, for which a company may, but need not, employ independent evaluators. Independent review by notified bodies is never strictly required.

Even without requirements, some companies may still choose to contract with notified bodies for independent evaluations. A notified body might offer this as one part of a package of compliance, monitoring, and oversight services for AI systems—a business model already visible in some existing AI assurance companies. This may be especially likely for larger companies, where regulatory compliance is as important as getting new products to market (which is not often the case for small businesses). Adding another wrinkle, the Commission can change the requirements for a category of high-risk AI later. For example, if the Commission finds that self-attestation has been insufficient to hold the market for AI workplace management software to account, it can require this set of AI systems to undergo independent assessment by a notified body. This is a potentially powerful mechanism for holding an industry to account, although it is unclear under what circumstances this authority would be used.

By and large, independent assessment of high-risk AI systems by notified bodies may be quite rare. This creates a dilemma for the EU AI Act. The time and effort necessary to implement this part of the law is not trivial. Member states need to establish a notifying authority to approve and monitor the notified bodies, as well as fulfill registration and reporting requirements. The legislative component is significant too, with 10 of 85 articles concerned with the notifying authority and notified body ecosystem.

This is a significant investment in an enforcement structure that the EU does not plan to use extensively. Further, the notified bodies would have no capabilities beyond those of the MSAs/NSAs, other than potentially developing a specialization in reviewing specific biometric applications. In the trilogue, EU legislators should consider whether the notified body ecosystem, with its currently very limited scope, is worth the effort of implementation. Given these limitations, the EU should concentrate on more direct oversight through the MSAs/NSAs, to the benefit of the AI Act’s enforcement.

Specifically, this would entail accepting the Parliament proposals to increase the oversight powers of the NSAs by giving them the ability to demand and evaluate not just the data of regulated organizations but also their trained models, which are central components of many AI systems. The Parliament text also states that the NSA can carry out “unannounced on-site and remote inspections of high-risk AI systems.” This expanded authority would better enable NSAs to directly check that companies and public agencies that self-certified their high-risk AI are meeting the new legal requirements.

What is the impact of individual redress on AI?

The processes for complaints, redress, and civil liability for individuals harmed by AI systems have changed significantly across the various versions of the AI Act. The Commission’s proposed version of the AI Act from April 2021 did not include a path for complaint or redress by individuals. Under the Council proposal, any individual or organization may submit complaints about an AI system to the pertinent market surveillance authority. The Parliament has proposed a new requirement to inform individuals when they are subject to a high-risk AI system, as well as an explicit right to an explanation if they are adversely affected by one (with none of the ambiguity of the GDPR). Further, individuals can complain to their NSA and have a right to judicial remedy if complaints to that NSA go unresolved, which adds another path to enforcement.

While liability is not explicitly covered in the AI Act, a newly proposed AI Liability Directive intends to clarify the role of civil liability for damage caused by AI systems in the absence of a contract. Several aspects of AI development challenge pre-existing liability rules, including the difficulty of ascribing responsibility to specific individuals or organizations and the opacity of decision-making by some “black box” AI systems. The AI Liability Directive seeks to reduce this uncertainty, first, by clarifying rules on the disclosure of evidence: judges may order providers and users of relevant AI systems to disclose evidence when a claim is supported by evidence of plausible damage. Second, the directive clarifies that a defendant’s fault can be proven by demonstrating (1) non-compliance with AI Act (or other EU) rules, (2) that this non-compliance was likely to have influenced the AI system’s output, and (3) that this output (or lack thereof) gave rise to the claimant’s damages.

Even if Parliament’s version of the AI Act and the AI Liability Directive are passed into law, it is unclear what effect these individual redress mechanisms will have. For instance, the right to an explanation might further incentivize companies to use simpler models for high-risk AI systems, such as choosing tree-based models over more “black box” models such as neural networks, as is commonly the result of a similar requirement in the U.S. consumer finance market.
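To illustrate the technical intuition behind that incentive, consider the minimal sketch below. It is a hypothetical example, not drawn from the AI Act or any regulator’s guidance: it assumes the scikit-learn library, uses a built-in toy dataset as a stand-in for a real high-risk system (say, credit scoring), and picks arbitrary model settings. The point is only that a shallow decision tree’s decision path can be printed and translated into a plain-language explanation of an individual outcome, while a neural network produces a prediction with no comparably direct, human-readable rationale.

```python
# Minimal sketch, assuming scikit-learn; the toy dataset and model settings
# are placeholders chosen purely for illustration.
from sklearn.datasets import load_breast_cancer
from sklearn.tree import DecisionTreeClassifier, export_text
from sklearn.neural_network import MLPClassifier

X, y = load_breast_cancer(return_X_y=True, as_frame=True)

# A shallow decision tree: its decision path can be printed verbatim and
# turned into a plain-language explanation of a single decision.
tree = DecisionTreeClassifier(max_depth=3, random_state=0).fit(X, y)
print(export_text(tree, feature_names=list(X.columns)))

# A neural network reaches its decision through thousands of learned weights;
# it returns a prediction but offers no built-in account of why.
mlp = MLPClassifier(hidden_layer_sizes=(64, 32), max_iter=1000,
                    random_state=0).fit(X, y)
print(mlp.predict(X.iloc[:1]))  # a prediction, but no decision path to read out
```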

Even with explanations, it may be challenging for individuals to know that they were harmed by an AI system, and it is not clear that there will be sufficient legal support services to pursue civil liability for AI harms. Non-profit advocacy organizations, such as Max Schrems’s NOYB, and consumer rights organizations, such as Euroconsumers or BEUC, may assist in some legal cases, especially in an effort to enforce the AI Act. However, non-profits like these can take on only a small number of cases, and it is hard to know whether the average plaintiff will be able to find and afford the specialized legal assistance necessary to bring cases against developers and deployers of AI systems. EU policymakers may want to be prudent in their assumptions about how much of the enforcement load can be carried by individual redress.

Enforcement and capacity issues should lead the trilogue debate

There are many other important enforcement issues worth discussing. The Parliament proposed an expanded AI Office, tasked with an extensive advisory role in many key decisions of AI governance. Parliament would also require deployers of high-risk AI systems to perform a fundamental rights impact assessment and mitigate any identified risks—a substantial expansion of deployers’ responsibilities. The Parliament also changed how AI systems would be covered by the legislation, pairing a broad definition of AI with a requirement that the systems pose risks of actual harm in enumerated domains. This leaves the final inclusion decision to NSAs, allowing these regulators to focus their efforts on the most impactful AI systems, but also creating new harmonization challenges. All these issues deserve attention, and they share a common requirement: capacity.

All the organizations involved—the government agencies, the independent assessors, the law firms, and more—will need AI expertise for the AI Act to work effectively. None of the AI Act will work, and in fact it will do significant harm, if its institutions don’t understand how to test AI systems, how to evaluate their impact on society, and how to govern them effectively. The absolute necessity of developing this expertise needs to be a priority for the EU, not an afterthought.

There is little empirical evidence on the EU’s preparedness to implement a comprehensive AI governance framework. However, there are some signals that indicate trouble ahead. Germany, the largest EU member state by population, is falling far behind its timeline for developing digital public services and is also struggling to hire technical talent for new data science labs in its federal ministries. Germany’s leading graduate program in this field (and one of very few in the EU), the Hertie School’s M.S. in Data Science for Public Policy, takes just 20 students per year.

Given this, it is informative that Germany ranks a bit below the EU average for digital public services, according to the EU’s Digital Economy and Society Index. France lies just ahead, with Italy and Poland falling notably behind Germany. Of the five most populated countries in the EU, only Spain, with a new regulatory AI sandbox and a new AI regulatory agency, seems well prepared. Although a more systematic study of digital governance capacity would be necessary to truly determine the EU’s preparedness, there is certainly cause for concern.

This is not to say the EU AI Act is doomed to failure or should be abandoned—it should not be. Rather, EU legislators should recognize that improving the inefficient enforcement structure, building new AI capacity, and prioritizing other implementation issues should be a preeminent concern of the trilogue debates. While this focus on enforcement may not deliver short-term political wins with the law’s passage, it will deliver effective governance and, eventually, much-needed legitimacy for the EU.