This piece summarizes the accompanying report, The EU AI Act Will Have Global Impact, but a Limited Brussels Effect.
The European Union’s AI Act (AIA) aspires to establish the first comprehensive regulatory scheme for artificial intelligence, but its impact will not stop at the EU’s borders. In fact, some EU policymakers see setting a worldwide standard as a critical goal of the AIA, so much so that some speak of a “race to regulate” AI. Yet a comprehensive analysis of the AIA’s provisions suggests a more limited global impact than EU policymakers present. While the AIA will contribute to the EU’s already significant global influence over some online platforms, it may otherwise only moderately shape international regulation.
Other analyses from the Brookings Institution have offered broad overviews of the legislation, but the AIA’s global impact has been less examined. General analyses suggest the EU may be losing its competitive advantage in digital governance as other countries invest in digital regulatory capacity and catch up to the EU. A more specific examination of the key provisions of the AIA offers insight into the potential extent of a “Brussels effect,” whereby the EU unilaterally sets rules that become worldwide standards—not through coercion, but through the appeal of its 450 million-strong consumer market. This Brussels effect is often assumed to be the default outcome of EU legislation, but careful consideration of the AIA’s provisions suggests a moderated impact on only some categories of AI systems: namely, high-risk AI in products, high-risk AI in human services, and AI that interacts with humans.
AI-driven products will see global changes, and thus global mediation
The proposed AIA covers a wide range of AI systems used in already-regulated sectors, such as aviation, automotive vehicles, boats, elevators, medical devices, industrial machinery, and more. To continue exporting to Europe’s large consumer market, foreign companies will have to adapt their products to conform to the AIA. Because many of these products are manufactured in large-scale industrialized processes, companies are likely to try to standardize their products to the EU’s rules, thus avoiding separate industrial processes and testing procedures for products bound for the EU versus the rest of the world. Once companies adapt to the EU’s rules, they will often have a strong incentive to keep domestic laws as consistent as possible with those in the EU—a classic instance of the Brussels effect. In fact, the EU has been found most influential when it has the power to exclude products from its market, as it will under the AIA.
However, several important factors may dull the EU’s unilateral global impact on high-risk AI in products. European standardization organizations (ESOs) will set critical standards around many of the AIA’s specific requirements, and companies, nations, and international standards bodies will all attempt to influence those processes. This includes the many companies that already sell foreign-made products in the EU, such as US medical devices, Japanese robotic arms, and Chinese vehicles. Many large international businesses will not quietly accept being regulated out of the EU; they can be expected to engage actively to make sure the rules do not disadvantage their market share. International standards bodies, which maintain a “high level of convergence” with the ESOs, offer another important avenue through which the EU’s goals will be moderated by global input. Lastly, foreign governments will also work to affect standards development, such as through the EU-U.S. Trade and Technology Council, which has a working group on technology standards, and perhaps the newer EU-India Trade and Technology Council. This furthers a recent trend in which countries including the U.S. and China have engaged far more actively in strategic approaches to international standard-setting as a method of international technology competition.
The AIA will drive changes in platforms with AI human services
The second category of high-risk AI systems does not fit into the EU’s pre-existing regulatory framework, but instead comprises a specified list of AI systems that affect human rights. These include private-sector AI applications in hiring, employee management, access to education, and credit scoring, as well as AI used by governments, although those are less relevant to this discussion. For these AI systems, the more they are incorporated into a geographically dispersed (rather than localized) platform, the more comprehensively the AIA’s requirements will affect the platform as a whole, rather than just its use in the EU.
For example, LinkedIn is an entirely interconnected platform with no geographic barriers, and its algorithms for placing job advertisements and recommender systems for job candidates will qualify as high-risk under the AIA. These AI systems are ingrained in the network of LinkedIn’s users, which makes them difficult to isolate in any clear geographic sense and suggests that changes to the AI system are more likely to be universal to the platform. This also means it will be very difficult for LinkedIn to meet the EU standards as well as a second (hypothetical) set of regulations from another country. Since LinkedIn, and its parent company Microsoft, can be expected to strongly resist additional requirements on AI from other countries that are incongruous with the AIA, this is also a clear manifestation of the Brussels effect.
However, the extraterritorial impact experienced by LinkedIn will be absent for AI human services that are less globally interconnected and are instead built on local data, with more isolated interactions with individuals—such as AI hiring systems that analyze resumes and evaluate job candidates. Developers may simply comply with AIA rules on AI systems they develop for the EU and may choose to only selectively adopt them elsewhere.
There are some inefficiencies in this approach—companies may need a coding and algorithmic development process for the EU that differs from the one used for other countries. However, code is far more flexible than industrial processes, making it much easier for companies to create different workflows for different regulatory environments. Maintaining two codebases is not as difficult as building two different conveyor belts.
At times, the line between local software and a global platform will be blurred, such as by AI systems in employee management software that are used by a global company with employees in Europe. Still, broadly speaking, the less an AI system is built into an international network or a platform, the less likely it is to be directly affected by the AIA, resulting in a lessened Brussels effect.
What does the AIA’s limited global reach mean?
The lion’s share of the AIA’s global impact will manifest through its rules on high-risk AI systems, but other provisions also matter. The AIA’s transparency obligations will require that people are informed they are interacting with an AI system, which will likely affect chatbots on websites and in commercial phone applications around the world. The AIA requires registration of high-risk AI in a public database, which will lead to a better understanding of how global companies use AI; consider that Uber and Lyft will have to register the many algorithms they use to manage their drivers. Further, the AIA’s oversight of the EU’s own government algorithms offers inspiration and a strong template for other countries’ domestic use of AI systems, if not any coercive effect.
While these changes are meaningful, they are hardly a reshaping of global governance of AI. All told, a careful analysis of its provisions broadly suggests only targeted extraterritorial impact, and a limited Brussels effect, from the AIA.
The rest of the world should still prepare for the AIA by considering how AI systems will interact with existing regulations. The U.S. Office of Management and Budget has issued guidance directing U.S. agencies to do just this, but agencies have largely not followed through. Further, rather than waiting on the EU and the AIA, governments should begin to implement their own algorithmic protections. In the U.S., the recent guidance from the Equal Employment Opportunity Commission on making AI hiring fairer for people with disabilities can serve as inspiration, as the Biden administration considers how to fulfill the promise of an AI Bill of Rights.
As for the EU, there are many good reasons to pass the AIA—fighting fraud, reducing discrimination, and curtailing surveillance capitalism, among others—but setting a global standard may not be one of them. This conclusion challenges the common European viewpoint that the economic benefits of the Brussels effect are a compelling reason to hurry the legislation to completion. The EU might have more to gain by signaling more openness to feedback from and cooperation with the rest of the democratic world, rather than saying it is racing to regulate—or else the EU may find itself alone at the finish line.