Commentary

The U.S. and EU should base AI regulations on shared democratic values

U.S. and European Union flags are pictured during the visit of Vice President Mike Pence to the European Commission headquarters in Brussels, Belgium, February 20, 2017. REUTERS/Francois Lenoir

Artificial intelligence (AI) is transforming how economies grow, generate jobs, and conduct international trade. The McKinsey Global Institute estimates that AI could add around 16 percent, or $13 trillion, to global output by 2030.

This makes AI a crucial piece of policy concerning digital trade, data flows, and their implications for additional policy issues such as cybersecurity, privacy, consumer protection, and the broader economic impacts of access to data and digital technologies. Governments around the world are responding with plans to promote research and development, AI investment, and trade.

Last week, the European Commission (EC) published a white paper on AI and a data strategy as part of a plan for “shaping Europe’s digital future,” carrying out President Ursula von der Leyen’s objective to coordinate an approach to AI. The paper recognizes that the EU lags China and the U.S. in AI investment, development, and data resources, but sees the EU’s strong manufacturing sector as an opportunity for EU leadership in AI.

The paper outlines the need for EU leadership in developing an “ecosystem of excellence” by mobilizing resources for research and innovation in AI, with the aim of attracting over €20 billion annually for AI over the next decade. It also identifies the need to develop an “ecosystem of trust” by putting in place a regulatory framework that gives citizens, companies, and public organizations confidence in using AI. This could include new EU regulation to address cybersecurity risks from AI, improve understanding of how decisions using AI are made, and expand consumer protection regulation to AI services. The EU is also focusing on the need to create European data spaces that can facilitate the use and sharing of data by business and government.

In January 2020, the White House proposed 10 AI regulatory principles to govern the development and use of AI technologies in the private sector. Some of the principles resonate with the EU’s objectives. The direction to federal agencies to avoid regulation that unnecessarily hampers AI innovation and growth could apply to the EU’s white paper drafting as much as to U.S. agencies.

If so, the white paper is likely to raise eyebrows in the U.S. In general, the white paper adopts a “risk-based” approach to AI regulation. For sectors and applications deemed “high-risk,” the EC outlined an approach that may include setting standards for the quality of AI systems and requiring conformity assessments, which could include testing and certification.

The white paper also declared that the EU “will continue to cooperate with like-minded countries, but also with global players.” Overlapping principles between the EC and U.S. announcements offer a basis for such cooperation with the United States. The White House principles include public trust in AI, the costs and risks of AI, and the impact of AI on fairness, discrimination, and security of information, as well as on privacy, individual rights, autonomy, and civil liberties. These resemble seven key requirements identified by an EU High Level Group of Experts on AI that are incorporated into the white paper: human agency and oversight; technical robustness and safety; privacy and data governance; transparency; diversity, non-discrimination, and fairness; societal and environmental wellbeing; and accountability. In fact, the U.S. and the EU both agree on the need for AI regulation; the key challenge will be doing it in a way that is effective and prevents unnecessary barriers to transatlantic trade and investment.

The white paper is clear that EU AI regulation will need to apply to all economic operators providing AI-enabled products and services. In other words, all investment and trade with the EU will need to be consistent with EU AI regulation, which, given the potential widespread uptake of AI, could affect significant amounts of trade and investment in goods and services.

The U.S. and the EU are strategic and economic allies as well as primary trading and investment partners that account for almost half of global GDP and remain each other’s most significant investment destinations, even after Brexit. The course of AI will be essential to the continued success of their economic relationship. This underscores the importance of developing a transatlantic dialogue among governments and stakeholders to align AI regulation, with an eye towards developing de facto global standards for AI governance.

Fostering a transatlantic AI marketplace would also be a positive strategic outcome for the EU, the U.S., and for global AI development. In addition to links with NATO and other strategic partners, common democratic values give the United States and the EU a further shared interest in cooperating on AI. This work takes on added significance given China’s focus on leading the AI race, and the likelihood that AI technologies based on Chinese-developed standards would be disadvantageous for the EU and the U.S. The two partners should lead the development and deployment of AI based on their shared values.