Commentary

The EU and U.S. are starting to align on AI regulation

European Commission President Ursula von der Leyen departs from the White House following her meeting with U.S. President Joe Biden

A range of regulatory changes and new hires under the Biden administration signals a more proactive federal stance toward artificial intelligence (AI) regulation, bringing the U.S. approach closer to that of the European Union (EU). These developments are promising, as is the inclusion of AI issues in the new EU-U.S. Trade and Technology Council (TTC). But there are other steps that these leading democracies can take to build alignment on curtailing AI harms.

Since 2017, at least 60 countries have adopted some form of artificial intelligence policy, a torrent of activity that nearly matches the pace of modern AI adoption. The expansion of AI governance raises concerns about looming challenges for international cooperation: the increasing ubiquity of AI in online services and physical devices means that any new regulations will have important ramifications for global markets. The variety of ways that AI can be trained and deployed further complicates this picture. For example, AI systems may be hosted in the cloud and accessed remotely from anywhere with an internet connection. Retraining and transfer learning enable teams working in multiple countries to jointly develop an AI model from many datasets. Edge and federated machine learning techniques enable physical products around the world to exchange data and model updates that affect the function of their AI models.

These considerations complicate AI governance, although they should not be used as an excuse to eschew necessary protections (the many arguments for which I will not repeat here). An ideal outcome would implement meaningful governmental oversight of AI while still enabling these global AI supply chains. Further, a more unified international approach to AI governance could strengthen common oversight, direct research toward shared challenges, and promote the sharing of best practices, code, and data.

It is perhaps with this in mind that the September 2021 TTC meeting prominently included a discussion of AI policy. This was by no means guaranteed, as other issues deliberated in the inaugural TTC convening in Pittsburgh, including semiconductors, investment screening, and export controls, have far longer histories as bilateral policy issues. Further, government officials who participated in the meeting expressed optimism about shared intentions around AI governance, specifically citing consensus both on a risk-based approach and on prohibiting extreme cases of governmental social scoring (see Annex III of the EU-U.S. TTC Inaugural Statement).

The extensive engagement of the EU on these issues likely elevated AI policy into the TTC. Most prominent is the EU's proposed AI Act, which would create regulatory oversight for a wide range of high-risk AI applications in both digital services (e.g., hiring and admissions software) and physical products (e.g., medical devices). The AI Act would also affect other types of AI, such as by requiring disclosure of low-risk AI systems and banning a few categories of AI outright, but these provisions will likely raise fewer international trade and regulatory considerations. Although there is still much uncertainty about how the AI Act's rules would be enforced, existing regulatory agencies within EU member states are likely to take on much of the work. Debate on the act's contents is still ongoing, and it is worth noting that, if passed, the new rules could take some time to take effect. Consider the case of the General Data Protection Regulation (GDPR). Recent fines on Amazon (€746 million) and WhatsApp (€225 million) for privacy violations demonstrate the EU's willingness to use its regulatory powers, but most of the significant penalties have come two years after the implementation and four years after the passage of the GDPR. If the AI Act follows a similar timeline, it may be years before significant oversight is in place.

The U.S. revs the regulatory engines

In contrast, incremental U.S. developments have made fewer headlines, but they are accumulating into a meaningful approach to AI regulation. Some agencies, such as the Food and Drug Administration and the Department of Transportation, have been working for years to incorporate AI considerations into their regulatory regimes. In late 2020, the Trump administration's Office of Management and Budget encouraged agencies to consider what regulatory steps might be necessary for AI, although it generally urged a light touch.

Since then, the pace of change has picked up under the Biden administration. The Federal Trade Commission (FTC) first published a widely noted blog post and then began a rulemaking process, making it clear that the agency considers issues of AI discrimination, fraud, and related data misuse to be within its purview. The Department of Housing and Urban Development has begun reversing a Trump administration rule that effectively shielded housing-related algorithms from claims of discrimination. In late October, the Equal Employment Opportunity Commission announced it would launch an initiative on applying hiring and workplace protections to AI systems. Further, five financial regulators have started an inquiry into AI practices at financial institutions that may affect risk management, fair lending, and creditworthiness. Lastly, the National Institute of Standards and Technology is in the process of developing an AI risk management framework. This list of policy interventions is starting to resemble the EU's view of "high-risk" AI. In fact, given that it could take the EU years from passage to set up and enforce its AI Act, the U.S. may find itself leading in many practical areas of AI regulation.

The expertise of staff joining the Biden administration also signals greater prominence for these issues; notable examples include AI Now Institute co-founder Meredith Whittaker at the FTC, as well as AI harms experts Suresh Venkatasubramanian and Rashida Richardson at the White House Office of Science and Technology Policy (OSTP). To advance its leadership's call for an AI Bill of Rights, OSTP has also started a public event series on biometric technologies and other potentially harmful AI applications. All told, these developments suggest that the Biden administration's outlook is closer to the EU's AI oversight goals than many seem to realize.

This trend is not limited to AI products and services. The Senate's recent introduction of the Platform Accountability and Transparency Act suggests the potential for more U.S.-EU consensus. The proposed legislation would enable university researchers to work with raw platform data, subject to approval by the National Science Foundation and with corporate compliance enforced by a new FTC office. This mirrors a core provision of the EU's proposed Digital Services Act, whose passage by the European Parliament seems increasingly likely.

Also relevant, though less specific to AI, is the July 2021 Biden executive order aimed at increasing competition in U.S. markets, which contains many tech-focused provisions. This order, along with the selection of Lina Khan to lead the FTC, convinced EU competition chief Margrethe Vestager that there is a "lot of alignment" between the two governments.

Getting proactive on regulatory cooperation

The emerging policy landscapes on both sides of the Atlantic reflect progress towards a significant governmental role in protecting citizens from AI harms. Yet this shared ambition does not make consistent regulations especially likely. For context, a 2009 analysis documented thousands of instances of regulatory divergence and non-tariff barriers to trade between the EU and U.S. That the ensuing efforts to bring these policies into alignment have gone quite poorly suggests that preventing incoherence in the first place may be the better approach. AI regulations, which are likely to include many technical definitions and even specific mathematical formulas, are certain to offer many opportunities for honest disagreement.

Beyond circumventing barriers to trade, consistent approaches may also strengthen government oversight. Enforcement of similar AI regulations by multiple governments can increase the odds that the worst offenses, at least by international businesses, are caught. Further, consistent governmental priorities send a clear signal to the civil society and academic communities in the EU and the U.S., directing investigations and research to shared concerns.

There are many incremental and feasible steps that the EU and U.S. can consider to set the stage for long-term AI policy coherence. Working towards common definitions of AI, even just for regulatory purposes, would be a good start. Encouraging more information sharing, such as between national standards bodies (as NIST and CEN-CENELEC have done on other technologies), could be another easy step. As regulatory responsibilities take shape, encouraging communication and collaboration between sector-specific regulators may prevent difficulties down the road. This could be facilitated by tasking a central office (such as the Directorate-General for Trade and the Department of Commerce) as an international AI regulatory coordinator that advises agencies on how to avoid conflicting future rules. If these steps prove fruitful, the EU and U.S. can work towards consistent processes and criteria for auditing AI systems.

Even more ambitious would be a joint approach to regulatory sandboxes, in which collaborative experimentation and testing of emerging AI systems could help facilitate better and more cohesive regulations. There might even be a shared approach to oversight of online platforms, under which the EU and U.S. agree to enable researchers to study pooled data from both continents, improving our understanding of online harms.

Broadly speaking, the EU and U.S. should include proactive regulatory cooperation within the scope of the TTC and start preparing for a wider international community with significant AI oversight measures. This agenda would also fit nicely into the Biden administration's emerging portfolio of more active democratic governance of technology, which includes the newly announced U.S.-UK Grand Challenges on Democracy-Affirming Technologies and plans to restrict exports of surveillance technologies to authoritarian governments.