The EU and U.S. diverge on AI regulation: A transatlantic comparison and steps to alignment

Executive Summary

The EU and the U.S. are jointly pivotal to the future of global AI governance. Ensuring that EU and U.S. approaches to AI risk management are generally aligned will facilitate bilateral trade, improve regulatory oversight, and enable broader transatlantic cooperation.

The U.S. approach to AI risk management is highly distributed across federal agencies, many adapting to AI without new legal authorities. Meanwhile, the U.S. has invested in non-regulatory infrastructure, such as a new AI risk management framework, evaluations of facial recognition software, and extensive funding of AI research. The EU approach to AI risk management is characterized by a more comprehensive range of legislation tailored to specific digital environments. The EU plans to place new requirements on high-risk AI in socioeconomic processes, the government use of AI, and regulated consumer products with AI systems. Other EU legislation enables more public transparency and influence over the design of AI systems in social media and e-commerce.

The EU and U.S. strategies share a conceptual alignment on a risk-based approach, agree on key principles of trustworthy AI, and endorse an important role for international standards. However, the specifics of these AI risk management regimes have more differences than similarities. Regarding many specific AI applications, especially those related to socioeconomic processes and online platforms, the EU and U.S. are on a path to significant misalignment.

The EU-U.S. Trade and Technology Council has demonstrated early success working on AI, especially on a project to develop a common understanding of metrics and methodologies for trustworthy AI. Through these negotiations, the EU and U.S. have also agreed to work collaboratively on international AI standards, while also jointly studying emerging risks of AI and applications of new AI technologies.

More can be done to further EU-U.S. alignment while also improving each government’s AI governance regime. Specifically:

  • The U.S. should execute on federal agency AI regulatory plans and use these for designing strategic AI governance with an eye towards EU-U.S. alignment.
  • The EU should create more flexibility in the sectoral implementation of the EU AI Act, improving the law and enabling future EU-U.S. cooperation.
  • The U.S. needs to implement a legal framework for online platform governance, but until then, the EU and U.S. should work on shared documentation of recommender systems and network algorithms, as well as perform collaborative research on online platforms.
  • The U.S. and EU should deepen knowledge sharing on a number of levels, including on standards development; AI sandboxes; large public AI research projects and open-source tools; regulator-to-regulator exchanges; and developing an AI assurance ecosystem.

More collaboration between the EU and the U.S. will be crucial, as these governments are implementing policies that will be foundational to the democratic governance of AI.

Introduction

Approaches to artificial intelligence (AI) risk management—shaped by emerging legislation, regulatory oversight, civil liability, soft law, and industry standards—are becoming key facets of international diplomacy and trade policy. In addition to encouraging integrated technology markets, a more unified international approach to AI governance can strengthen regulatory oversight, guide research towards shared challenges, promote the exchange of best practices, and enable the interoperability of tools for trustworthy AI development.

Especially impactful in this landscape are the EU and the U.S., which are both currently implementing foundational policies that will set precedents for the future of AI risk management within their territories and globally. The governance approaches of the EU and U.S. touch on a wide range of AI applications with international implications, including more sophisticated AI in consumer products; a proliferation of AI in regulated socioeconomic decisions; an expansion of AI in a wide variety of online platforms; and public-facing web-hosted AI systems, such as generative AI and foundation models.[i] This paper considers the broad approaches of the U.S. and the EU to AI risk management, compares policy developments across eight key subfields, and discusses collaborative steps taken so far, especially through the EU-U.S. Trade and Technology Council. Further, this paper identifies key emerging challenges to transatlantic AI risk management and offers policymaking recommendations that might advance well-aligned and mutually beneficial EU-U.S. AI policy.

The U.S. Approach to AI Risk Management

The U.S. federal government’s approach to AI risk management can broadly be characterized as risk-based, sectorally specific, and highly distributed across federal agencies. This approach has advantages, but it also contributes to the uneven development of AI policies. While there are several guiding federal documents from the White House on AI harms, they have not created an even or consistent federal approach to AI risks.

The February 2019 executive order, Maintaining American Leadership in Artificial Intelligence (EO 13859), and its ensuing Office of Management and Budget (OMB) guidance (M-21-06) presented the first federal approach to AI oversight. Delivered in November 2020, 15 months after the deadline set in EO 13859, the OMB guidance clearly articulated a risk-based approach, stating “the magnitude and nature of the consequences should an AI tool fail…can help inform the level and type of regulatory effort that is appropriate to identify and mitigate risks.” These documents also urged agencies to consider key facets of AI risk reduction through regulatory and non-regulatory interventions. This includes using scientific evidence to determine AI’s capabilities, enforcing non-discrimination statutes, considering disclosure requirements, and promoting safe AI development and deployment. While these documents reflected the Trump administration’s minimalist regulatory perspective, they also required agencies to develop plans to regulate AI applications.

By and large, federal agencies have still not developed the required AI regulatory plans. In December 2022, Stanford University’s Center for Human-Centered AI released a report stating that only five of 41 major agencies created an AI plan as required.[ii] This is a generous interpretation, as only one major agency, the Department of Health and Human Services (HHS), provided a thorough plan in response. HHS extensively documented the agency’s authority over AI systems (through 12 different statutes), its active information collections (e.g., on AI for genomic sequencing), and the emerging AI use cases of interest (mostly in illness detection). The thoroughness of the HHS’s regulatory plan shows how valuable this endeavor could be for federal agency planning and informing the public if other agencies were to follow in HHS’s footsteps.

Rather than further implementing EO 13859, the Biden administration instead revisited the topic of AI risks through the Blueprint for an AI Bill of Rights (AIBoR). Developed by the White House Office of Science and Technology Policy (OSTP), the AIBoR includes a detailed exposition of AI harms to economic and civil rights, five principles for mitigating these harms, and an associated list of federal agencies’ actions. The AIBoR endorses a sectorally specific approach to AI governance, with policy interventions tailored to individual sectors such as health, labor, and education. Its approach is therefore quite reliant on these associated federal agency actions, rather than centralized action, especially because the AIBoR is nonbinding guidance.

That the AIBoR does not directly compel federal agencies to mitigate AI risks is clear from the patchwork of responses, with significant efforts in some agencies and non-response in others. Further, despite the five broad principles outlined in the AIBoR,[iii] most federal agencies are only able to adapt their pre-existing legal authorities to algorithmic systems. This is best demonstrated by agencies regulating AI used to make socioeconomic decisions. The Federal Trade Commission (FTC) can use its authority to protect against “unfair and deceptive” practices to enforce truth in advertising and some data privacy guarantees in AI systems. The FTC is also actively considering how its existing authorities affect data-driven commercial surveillance, including algorithmic decision-making, and some advocacy organizations have argued the FTC can place transparency and fairness requirements on such algorithmic systems. The Equal Employment Opportunity Commission (EEOC) can impose some transparency, require a non-AI alternative for people with disabilities, and enforce non-discrimination in AI hiring. The Consumer Financial Protection Bureau (CFPB) requires explanations for credit denials from AI systems and could potentially enforce non-discrimination requirements. There are other examples; however, in no sector does any agency have the legal authorities necessary to enforce all of the principles expressed by the AIBoR, nor those in EO 13859.

Of these principles, the Biden administration has been especially vocal on racial equity and in February 2023 published the executive order Further Advancing Racial Equity and Support for Underserved Communities Through the Federal Government (EO 14091). The second executive order on this subject, EO 14091, directs federal agencies to address emerging risks to civil rights, including “algorithmic discrimination in automated technology.” It is too soon to know the impact of this new executive order.

Federal agencies with regulatory purview over consumer products are also making adjustments. One leading agency is the Food and Drug Administration (FDA), which has been working to incorporate AI, and specifically machine learning, in medical devices since at least 2019. The FDA now publishes best practices for AI in medical devices, documents commercially available AI-enabled medical devices, and has promised to perform relevant pilots and advance regulatory science in its AI action plan. Aside from the FDA, the Consumer Product Safety Commission (CPSC) stated in 2019 its intention to research and track incidents of AI harms in consumer products, as well as to consider policy interventions including public education campaigns, voluntary standards, mandatory standards, and pursuing recalls. In 2022, the CPSC issued a draft report on how to test and evaluate consumer products that incorporate machine learning. Meanwhile, the Department of Transportation’s Automated Vehicles Comprehensive Plan, issued in the final days of the Trump administration, sought to remove regulatory requirements for semi- and fully autonomous vehicles.

In parallel with the uneven state of AI regulatory developments, the U.S. is continuing to invest in infrastructure for mitigating AI risks. Most notable is the National Institute of Standards and Technology’s (NIST) AI Risk Management Framework (RMF), first released as a draft on March 17, 2022, with a final release on January 26, 2023. The NIST AI RMF is a voluntary framework that builds on the Organization for Economic Cooperation and Development’s (OECD) Framework for the Classification of AI Systems by offering comprehensive suggestions on when and how risk can be managed throughout the AI lifecycle. NIST is also developing a new AI RMF Playbook, with concrete examples of how entities can implement the RMF across the data collection, development, deployment, and operation of AI. The NIST AI RMF will also be accompanied by a series of case studies, each of which will document the steps and interventions taken to mitigate risk within a specific AI application. While it is too soon to tell what degree of adoption the NIST AI RMF will achieve, the 2014 NIST Cybersecurity Framework has been widely, if usually only partially, adopted by industry.

NIST also plays a role in evaluating and publicly reporting on the accuracy and fairness of facial recognition algorithms through its ongoing Face Recognition Vendor Test program. In one analysis, NIST tested and compared 189 commercial facial recognition algorithms for accuracy on different demographic groups, contributing valuable information to the AI marketplace and improving public understanding of these tools.
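
As an illustration of what such a disaggregated evaluation involves, the sketch below computes a false match rate separately for two demographic groups at a fixed decision threshold. This is a minimal sketch with synthetic scores, not NIST’s actual methodology or data; all names, numbers, and distributions are assumptions made for the example.

```python
# Minimal sketch of a disaggregated evaluation in the spirit of NIST's
# demographic analyses -- NOT NIST's actual methodology. Assumes a set of
# impostor comparisons (pairs of different people) with similarity scores,
# each tagged with a demographic group label.
import numpy as np

def false_match_rate(scores: np.ndarray, threshold: float) -> float:
    """Fraction of impostor pairs incorrectly scored at or above the threshold."""
    return float(np.mean(scores >= threshold))

# Hypothetical impostor scores per demographic group (illustrative only).
rng = np.random.default_rng(0)
impostor_scores = {
    "group_a": rng.normal(0.30, 0.10, 10_000),
    "group_b": rng.normal(0.38, 0.10, 10_000),  # systematically higher scores
}

THRESHOLD = 0.60  # operating point chosen by the deployer
for group, scores in impostor_scores.items():
    fmr = false_match_rate(scores, THRESHOLD)
    print(f"{group}: false match rate = {fmr:.4f}")
# A large gap between groups at the same threshold is one signal of
# demographic bias in a face recognition algorithm.
```

Publishing this kind of per-group comparison across many vendors is what makes the evaluation useful to the marketplace: buyers can see not just overall accuracy but how error rates differ across populations.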

An assortment of other policy actions addresses some algorithmic harms and contributes to future institutional preparedness and thus warrants mention, even if AI risk is not the primary orientation. Launched in April 2022, the National AI Advisory Committee may play an external advisory role in guiding government policy on managing AI risks in areas such as law enforcement, although it is primarily concerned with advancing AI as a national economic resource. The federal government has also run several pilots of an improved hiring process, aimed at attracting data science talent to the civil service, a key aspect of preparedness for AI governance. Currently, the “data scientist” occupational series is the most relevant federal government job for the technical aspects of AI risk management. However, this role is more oriented towards performing data science than reviewing or auditing AI models created by private sector data scientists.[iv]

The U.S. government first published a national AI Research and Development Strategic Plan in 2016, and in 2022, 13 federal departments funded AI research and development. The National Science Foundation has now funded 19 interdisciplinary AI research institutes, and the academic work coming from some of these institutes is advancing trustworthy and ethical AI methods. Similarly, the Department of Energy was tasked with developing more reliable AI methods which might inform commercial activity, such as in materials discovery. Further, the Biden administration will seek an additional $2.6 billion over six years to fund AI infrastructure under the National AI Research Resource (NAIRR) project, which states that encouraging trustworthy AI is one of its four key goals. Specifically, the NAIRR could be used to better study the risks of emerging large AI models, many of which are currently developed without public scrutiny.

In a significant recent development, several states, including California, Connecticut, and Vermont, have introduced legislation to tackle algorithmic harms. While these bills might meaningfully improve AI protections, they could also lead to future pre-emption issues that would mirror the ongoing challenge of passing federal privacy legislation (namely, how federal legislation should replace or augment various state laws).

The EU Approach to AI Risk Management

The EU’s approach to AI risk management is complex and multifaceted, building on implemented legislation, especially the General Data Protection Regulation (GDPR), and spanning newly enacted legislation, namely the Digital Services Act and Digital Markets Act, as well as legislation still being actively debated, particularly the AI Act, among other relevant endeavors. The EU has consciously developed different regulatory approaches for different digital environments, each with a different degree of emphasis on AI.

Aside from its data privacy implications, GDPR contains two important articles related to algorithmic decision-making. First, GDPR states that algorithmic systems should not be allowed to make significant decisions that affect legal rights without any human supervision. Based on this clause, in 2021, Uber was required to reinstate six drivers who were found to have been fired solely by the company’s algorithmic system. Second, GDPR guarantees an individual’s right to “meaningful information about the logic” of algorithmic systems, at times controversially deemed a “right to explanation.” In practice, companies such as home insurance providers have offered limited responses to requests for information about algorithmic decisions. There are many open questions about this clause, including how often affected individuals request this information, how valuable the information is to them, and what happens when companies refuse to provide it.

The EU AI Act will be an especially critical component of the EU’s approach to AI risk management. While the AI Act is not yet finalized, enough can be inferred from the European Commission proposal from April 2021, the final Council of the EU proposal from December 2022, and the available information from the ongoing European Parliament discussions to analyze its key features.

Although it is often referred to as “horizontal,” the AI Act implements a tiered system of regulatory obligations for a specifically enumerated list of AI applications. Several AI applications, including deepfakes, chatbots, and biometric analysis, must clearly disclose themselves to affected persons. A different set of AI systems with “unacceptable risks” would be banned completely, potentially including AI for social scoring,[v] AI-enabled manipulative technologies, and, with several important exceptions, biometric identification by law enforcement in public spaces.

Between these two tiers sit “high-risk” AI systems, the most inclusive and impactful designation in the EU AI Act. Two categories of AI applications will be designated as high-risk under the AI Act: regulated consumer products and AI used for impactful socioeconomic decisions. All high-risk AI systems will have to meet standards of data quality, accuracy, robustness, and non-discrimination, while also implementing technical documentation, record-keeping, a risk management system, and human oversight. Entities that sell or deploy covered high-risk AI systems, called providers, will need to meet these requirements and submit documentation that attests to the conformity of their AI systems or otherwise face fines as high as 6% of annual global turnover.

The first category of high-risk AI includes consumer products that are already regulated under the New Legislative Framework, the EU’s single-market regulatory regime, which includes products such as medical devices, vehicles, boats, toys, and elevators. Generally speaking, this means that AI-enabled consumer products will still go through the pre-existing regulatory process under the pertinent product harmonization legislation and will not need a second, independent conformity assessment just for the AI Act requirements. The requirements for high-risk AI systems will be incorporated into the existing product harmonization legislation. As a result, in going through the pre-existing regulatory process, businesses will have to pay more attention to AI systems, reflecting the fact that some modern AI systems may be more opaque, less predictable, or plausibly update after the point of sale.

Notably, some EU agencies have already begun to consider how AI affects their regulatory processes. One leading example is the EU’s Aviation Safety Agency, which first set up an AI taskforce in 2018, published an AI roadmap oriented towards aviation safety in 2020, and released comprehensive guidance for AI that assists humans in aviation systems in 2021.

The second high-risk AI category comprises an enumerated list of AI applications that includes impactful private-sector socioeconomic decisions (namely hiring, educational access, financial services access, and worker management) as well as government applications in public benefits, law enforcement, border control, and judicial processes. Unlike consumer products, these AI systems are generally seen as posing new risks and have been, until now, largely unregulated. This means that the EU will need to develop specific AI standards for all of these various use cases (i.e., how accuracy, non-discrimination, risk management, and the other requirements apply to each of the covered AI applications). This is broadly expected to be a very significant implementation challenge, given the number of high-risk AI applications and the novelty of AI standards. The European Commission is expected to rely on the European standards organizations CEN/CENELEC, best evinced by a request to that effect drafted in May 2022. These standards will likely play a huge role in the efficacy and specificity of the AI Act, as meeting them will be the most certain path for companies to attain legal compliance.

Further, companies that sell or deploy high-risk AI systems will have to assert their systems meet these requirements and submit documentation to that effect in the form of a conformity assessment. These companies must also register their systems in an EU-wide database that will be made available to the public, creating significant transparency into the number of high-risk AI systems, as well as into the extent of their societal impact.

Lastly, although not included in the original Commission proposal, the Council of the EU has proposed, and the European Parliament is considering, new regulatory requirements on “general-purpose AI systems.”[vi] Various definitions are still under consideration, but the category will likely include large language, image, and audio models. The regulatory requirements on general-purpose AI could include standards around accuracy, robustness, non-discrimination, and a risk management system.

The EU AI Act is not the only major EU legislation that addresses AI risk. The EU already passed the Digital Services Act (DSA) and Digital Markets Act (DMA), and a future AI Liability Directive may also play an important role. The DSA, passed in November 2022, considers AI as part of its holistic approach to online platforms and search engines. By creating new transparency requirements, requiring independent audits, and enabling independent research on large platforms, the DSA will reveal much new information about the function and harms of AI in these platforms. Further, the DSA requires large platforms to explain their AI for content recommendations, such as populating news feeds, and to offer users an alternative recommender system not based on sensitive user data. To the extent that these recommender systems contribute to the spread of disinformation, and large platforms fail to mitigate that harm, platforms may face fines under the DSA.

Similarly, the DMA is broadly aimed at increasing competition in digital marketplaces and considers some AI deployments in that scope. For example, large technology companies deemed to be “gatekeepers” under the law will be barred from self-preferencing their own products and services over those of third parties, a rule that is certain to affect AI ranking in search engines and the ordering of products on e-commerce platforms. The European Commission will also be able to conduct inspections of gatekeepers’ data and AI systems. While the DMA and DSA are not primarily about AI, these laws signal a clear willingness by the EU to govern AI built into highly complex systems.

Contrasting the EU and U.S. Approaches to AI Risk Management

The broad narratives above enable some comparisons between the U.S. and EU approaches to AI risks. Both governments espouse largely risk-based approaches to AI regulation and have described similar principles for how trustworthy AI should function. In fact, looking across the principles in the most recent guiding documents in the U.S. (the AIBoR and the NIST AI RMF) and the EU AI Act shows near-perfect overlap. All three documents advocate for accuracy and robustness, safety, non-discrimination, security, transparency and accountability, explainability and interpretability, and data privacy, with only minor variations. Further, both the EU and the U.S. expect standards organizations, both governmental and international bodies, to play a significant role in setting guardrails on AI.

Despite this broad conceptual alignment, there are far more areas of divergence than convergence in AI risk management. The EU’s approach, in aggregate, has far more centrally coordinated and comprehensive regulatory coverage than the U.S., both in terms of including more applications and promulgating more binding rules for each application. Even though U.S. agencies have begun in earnest to write guidelines and consider rulemaking for AI applications within their domains, their ability to enforce these rules remains unclear, and they may need to pursue novel litigation, often without explicit legal authority to regulate algorithms, to attempt to effectuate them. EU regulators, by contrast, will generally be able to enforce their rules on AI applications with clear investigatory powers and significant fines for non-compliance.

EU interventions will also create far more public transparency into the role of AI in society, such as through the EU-wide database of high-risk AI systems and the independent researcher access to data from large online platforms. Conversely, the U.S. federal government is investing significantly more funding in AI research, which may contribute to the development of new technologies that mitigate AI risks.

These high-level distinctions are informative, but insufficiently precise to understand how the U.S. and EU approaches may align or misalign in the future. The discussion below, summarized in Table 1, offers a more detailed comparison that independently considers categories of AI applications and relevant policy interventions from the U.S. and the EU.

Table 1. Comparison of EU and U.S. AI risk management by application type

  • AI for human processes/socioeconomic decisions
    Examples: AI in hiring, educational access, and financial services approval
    EU policy developments: GDPR requires a human in the loop for significant decisions. High-risk AI applications in Annex III of the EU AI Act would need to meet quality standards, implement a risk management system, and perform a conformity assessment.
    U.S. policy developments: The AI Bill of Rights and associated federal agency actions have created patchwork oversight for some of these applications.

  • AI in consumer products
    Examples: AI in medical devices, partially autonomous vehicles, and planes
    EU policy developments: The EU AI Act considers AI implemented within products that are already regulated under EU law to be high risk; new AI standards would be incorporated into the current regulatory process.
    U.S. policy developments: Individual federal agency adaptations, such as by the FDA for medical devices, DOT for automated vehicles, and CPSC for consumer products.

  • Chatbots
    Examples: Sales or customer service chatbots on commercial websites
    EU policy developments: The EU AI Act would require disclosure that a chatbot is an AI (i.e., not a human).
    U.S. policy developments: NA

  • Social media recommender and moderation systems
    Examples: Newsfeeds and group recommendations on TikTok, Twitter, Facebook, or Instagram
    EU policy developments: The Digital Services Act creates transparency requirements for these AI systems and enables independent research and analysis.
    U.S. policy developments: NA

  • Algorithms on e-commerce platforms
    Examples: Algorithms for search or recommendation of products and vendors on Amazon or Shopify
    EU policy developments: The Digital Markets Act will restrict self-preferencing algorithms in digital markets. Individual antitrust actions (e.g., against Amazon and Google Shopping) have sought to reduce self-preferencing in e-commerce algorithms and platform design.
    U.S. policy developments: NA

  • Foundation models/generative AI
    Examples: Stability AI’s Stable Diffusion and OpenAI’s GPT-3
    EU policy developments: Draft proposals of the EU AI Act consider quality and risk management requirements.
    U.S. policy developments: NA

  • Facial recognition
    Examples: Clearview AI, PimEyes, Amazon Rekognition
    EU policy developments: The EU AI Act will include restrictions on remote facial recognition and biometric identification. EU data protection authorities have fined facial recognition companies under GDPR.
    U.S. policy developments: NIST’s Face Recognition Vendor Test program contributes efficacy and fairness information to the market for facial recognition software.

  • Targeted advertising
    Examples: Algorithmically targeted advertising on websites and phone applications
    EU policy developments: EU data protection authorities have fined Meta under GDPR for using personal user data for behavioral ads. The Digital Services Act bans targeted advertising to children and certain types of profiling (e.g., by sexual orientation); it requires that targeted ads have explanations and that users have control over what ads they see.
    U.S. policy developments: Individual federal agency lawsuits have slightly curtailed some targeted advertising, including DOJ and HUD’s successful suit against Meta for discriminatory housing ads and an FTC penalty against Twitter for using security data for targeted ads.

The EU and U.S. are taking distinct regulatory approaches to AI used for impactful socioeconomic decisions, such as hiring, educational access, and financial services. The EU’s approach has both wider coverage of applications and a broader set of rules for those applications. The U.S. approach is narrower, relying on adapting current agency regulatory authority to AI, which is much more limited. Given that many in the U.S. are not expecting comprehensive legislation, some agencies have begun this work in earnest, counterintuitively putting them ahead of many EU agencies. However, EU member state agencies and a potential EU AI board can be expected to catch up, due to a stronger mandate, new authorities, and funding from the EU AI Act. Uneven authorities between the EU and U.S., as well as differing timelines for AI regulations, may make alignment a significant challenge.

The longstanding trade of physical consumer products between the EU and U.S. may prove helpful for AI regulatory alignment in this context. Many U.S. products already meet more stringent EU product safety rules in order to access the European market without a differentiated production process. This does not seem likely to be significantly altered by the new EU rules, which will certainly impact commercial products but are unlikely to lead to large changes in the regulatory process or otherwise prevent U.S. companies from meeting EU requirements. Rules for AI built into physical products are very likely to see a “Brussels Effect,” in which trading partners, including the U.S., seek to influence but then eventually adopt EU standards.

Several topics have attracted successful legislative efforts from the EU but not from the U.S. Congress. Most notable are online platforms, including e-commerce, social media, and search engines, which the EU has tackled through the DSA and DMA. There is, at present, no comparable approach in the U.S., nor has the policy conversation been moving towards a clear consensus.

Under the EU AI Act, chatbots would face a disclosure requirement, which is presently absent in the United States. Further, facial recognition technologies will have dedicated rules prescribed by the EU AI Act, although these provisions remain hotly debated. The U.S.’s approach so far has been to contribute to public information through the NIST Face Recognition Vendor Test program, but not to mandate rules.

Similarly, although the European debate over generative AI is new, it is plausible that the EU will include some regulation of these models in the EU AI Act. This could potentially include quality standards, requirements to transfer information to third-party clients, and/or a risk management system for generative AI. At this time, there is no strong evidence that the U.S. plans to execute on any similar steps.

EU-U.S. Collaboration on AI Risk through the Trade and Technology Council

The Trade and Technology Council (TTC) is an EU-U.S. forum for enabling ongoing negotiations and better cooperation on trade and technology policy. The TTC arose after a series of diplomatic improvements between the U.S. and EU, such as working together on a global minimum corporate tax and resolving tariff disputes on steel, aluminum, and airplanes.

After the first ministerial of the U.S.-EU TTC in September 2021 in Pittsburgh, the inaugural statement included a noteworthy section on AI collaboration in Annex III. The statement acknowledged the risk-oriented approaches of both the EU and the U.S. and committed to three projects under the umbrella of advancing trustworthy AI: (1) discussing measurement and evaluation of trustworthy AI; (2) collaborating on AI technologies designed to protect privacy; and (3) jointly producing an economic study of AI’s impact on the workforce. Since then, all three projects have identified and begun to execute on specific deliverables, resulting in some of the most concrete outcomes of the broader TTC endeavor.

  1. As part of the first project on measurement and evaluation, the TTC Joint Roadmap on Evaluation and Measurement Tools for Trustworthy AI and Risk Management was published on December 1, 2022. This roadmap includes three substantive commitments. First, the EU and U.S. will work towards common terminology of trustworthy AI, which is a prerequisite step for alignment of AI risk policies. This will be furthered by building a common knowledge base of metrics and methodologies, including the scientific study of trustworthy AI tools, which might engender some scientific consensus on best practices of AI implementation. The TTC’s collaborative efforts to document tools and methods will likely draw on pre-existing efforts, especially the OECD-NIST Catalogue of AI Tools and Metrics, which has made significant progress in this line of work. This is a valuable project, as a common understanding of the available tools and metrics is critical to operationalizing the shared principles of the U.S. and EU.

    Under the second component of the Joint Roadmap, the EU and U.S. also commit to coordinating their work with international standards bodies on trustworthy AI. This is potentially a reflection of the U.S.’s realization of the key role that EU standards bodies will play in the EU AI Act. Further, the EU recognizes that it will be resource-intensive to develop the many standards it needs for the implementation of the various pieces of legislation that affect AI risk management. A recent report from the European Commission on the AI standards landscape suggests that the EU is expecting to draw from the International Organization for Standardization and International Electrotechnical Commission, international standards bodies that have cooperation agreements with CEN and CENELEC respectively. Further, the same European Commission report notes that they have already begun to examine other AI standards, specifically those from the Institute of Electrical and Electronics Engineers (IEEE). 

    Lastly, the roadmap calls for jointly tracking and categorizing emerging risks of AI, including incidents of demonstrated harms, and working towards compatible evaluations of AI systems. Broadly, these are sensible first steps for building the foundations of alignment on AI risk, although they do not commit to much beyond that.

  2. Under the second project on AI collaboration, the EU and U.S. agreed to develop a pilot project on Privacy-Enhancing Technologies (PETs). Rather than being intended solely to increase privacy, PETs are a category of technologies that aim to enable large-scale data analysis while maintaining some degree of data privacy. PETs, including federated learning, differential privacy, and secure multiparty computation, have been demonstrated to enable broader use of sensitive data from private sector and government sources, in areas such as medical imaging, neighborhood mobility, and the effects of social media on democracy (a minimal sketch of one such technique, differential privacy, follows this list). Following the third TTC ministerial on December 5, 2022, the EU and U.S. announced an agreement to jointly pilot PETs for health and medicine applications. Although not directly oriented around AI risk, in a January 27 addendum to the TTC third ministerial, the EU and U.S. also announced joint research projects on AI for climate forecasting, emergency response, medicine, electric grids, and agriculture.
  3. The deliverable for the third project was also released after the third TTC ministerial: a report on the impact of AI on the workforce, co-written by the European Commission and the White House Council of Economic Advisers. The report highlights a series of challenges, including that AI may displace higher-skill jobs not previously threatened by automation and that AI systems may be discriminatory, biased, or fraudulent in ways that affect labor markets. The report suggests funding appropriate job transition services, encouraging the adoption of AI that is beneficial for labor markets, and investing in regulatory agencies to ensure AI hiring and algorithmic management practices are fair and transparent.
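
To make the PETs discussion above concrete, here is a minimal sketch of one such technique, differential privacy, using the standard Laplace mechanism for a counting query. The data, epsilon values, and function names are illustrative assumptions, not drawn from the EU-U.S. pilot.

```python
# Minimal sketch of differential privacy via the Laplace mechanism --
# illustrative only, not drawn from the EU-U.S. PETs pilot.
import numpy as np

def dp_count(values, predicate, epsilon: float, rng) -> float:
    """Differentially private count of records satisfying a predicate.

    A counting query has sensitivity 1 (adding or removing one record
    changes the count by at most 1), so Laplace noise with scale 1/epsilon
    yields epsilon-differential privacy.
    """
    true_count = sum(1 for v in values if predicate(v))
    noise = rng.laplace(loc=0.0, scale=1.0 / epsilon)
    return true_count + noise

rng = np.random.default_rng(42)
# Hypothetical sensitive attribute, e.g., patients with a given diagnosis.
records = rng.integers(0, 2, size=5_000)

for eps in (0.1, 1.0, 10.0):
    released = dp_count(records, lambda v: v == 1, eps, rng)
    print(f"epsilon={eps}: released count = {released:.1f}")
# Smaller epsilon means stronger privacy but noisier (less accurate) output:
# the core trade-off that PETs aim to manage.
```

The same privacy-utility trade-off, tuned through parameters like epsilon, is what joint pilots must calibrate before sensitive health data can be analyzed across institutions.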

Emerging Challenges in Transatlantic AI Risk Management

As demonstrated by Table 1, building transatlantic, and more so global, alignment on AI risk management will be an ongoing enterprise that spans a range of digital policy issues. While there are many potential obstacles to transatlantic consensus, the comparison of EU and U.S. approaches to AI elevates several emerging challenges as especially critical.

Most immediately, the emerging rules for impactful socioeconomic decisions are already leading towards significant misalignment. The most obvious reason is that the EU AI Act enables broad regulatory coverage over many types of AI systems, allowing rules that enforce the EU’s principles across the board. On the other hand, U.S. federal agencies are largely constrained to adapting existing U.S. law to AI systems. While some agencies have pertinent existing authority (the FTC, CFPB, and EEOC, as mentioned, among others), these authorities cover only a subset of the algorithmic principles espoused in the AIBoR and enforced in the EU AI Act. As another example, the U.S. Securities and Exchange Commission may be able to apply a fiduciary duty to financial recommender algorithms, requiring them to promote the best interest of the investor. While potentially a valuable protection, the resulting policy is unlikely to map neatly onto the EU AI Act requirements, even as those are applied more specifically to financial services (a category of high-risk AI applications in the EU AI Act).

It is not yet clear if the promised EU-U.S. collaboration on standards development will significantly mitigate this misalignment. The EU AI Act calls for a wide variety of standards to be produced in a short time, potentially leading to a range of decisions before U.S. regulators have had time to substantively engage on standards development. Further, some U.S. regulators of socioeconomic decisions (e.g., the CFPB, SEC, and EEOC, as well as the Department of Housing and Urban Development (HUD)) may not have worked closely with standards bodies such as NIST or with international standards bodies such as ISO/IEC and IEEE.

Therefore, the potential for misalignment in the regulatory requirements for socioeconomic decisions is quite high. Of course, in order to compete in the EU, U.S. companies may still meet EU standards where domestic requirements are lacking. Whether they follow the EU rules outside the EU significantly depends on whether the cost of meeting EU rules is lower than the cost of differentiation—that is, creating different AI development processes for different geographies. At present, many AI models for socioeconomic decisions are already relatively customized to specific geographies and languages, therefore reducing the imminent harm of conflicting international regulations.

Online platforms present a second significant challenge. The EU has passed, and is beginning to implement, the DSA and DMA. These acts have significant implications for AI in social media, e-commerce, and online platforms in general, while the U.S. does not yet appear prepared to legislate on these issues. This is particularly worrisome, as more digital systems are progressively integrated into platforms, meaning they are more likely to connect many users across international borders. While social media and e-commerce are the most familiar examples, newer iterations include online education websites, job discovery and hiring platforms, securities exchanges, and workplace monitoring software deployed across multinational firms.

These newer platforms may use AI that is covered under the high-risk socioeconomic decisions in the EU AI Act and also governed by U.S. federal regulatory agencies. However, the platforms themselves may also depend on AI to function, in the form of network algorithms or recommender systems. Most platforms require such algorithms: displaying the entirety of a platform to all users is typically impossible, so algorithms must decide what summaries, abstractions, or rankings to show. This creates the significant possibility of a large online platform’s AI systems being governed both by regulations for socioeconomic decision-making (e.g., the EU AI Act and U.S. regulators) and by online platform requirements (e.g., the DSA). It is typically more difficult, though not necessarily impossible, for platforms to operate under several distinct regulatory regimes. This complex environment raises the potential for future EU-U.S. misalignment, as the EU continues to roll out comprehensive platform governance while U.S. policy developments remain obstructed. This environment of high-stakes socioeconomic decisions built into algorithmically managed digital platforms may also be an important test case for governing progressively more complex algorithmic systems. Aside from “exchanging information,” there is no clear path towards closer collaboration on platform policy, or related AI systems, in the TTC.
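
To make concrete why such ranking algorithms are unavoidable, the sketch below shows a bare-bones recommender of the kind described above: it scores a catalog far too large to display by the dot product of user and item embeddings and surfaces only the top-k items. All data, dimensions, and names here are hypothetical, chosen only for illustration.

```python
# Bare-bones recommender system sketch: rank items for a user by the dot
# product of learned embeddings and show only the top-k. Illustrative only.
import numpy as np

rng = np.random.default_rng(7)
N_ITEMS, DIM, K = 100_000, 32, 10  # a platform cannot show all 100k items

item_embeddings = rng.normal(size=(N_ITEMS, DIM))  # learned offline in practice
user_embedding = rng.normal(size=DIM)

scores = item_embeddings @ user_embedding          # one relevance score per item
top_k = np.argsort(scores)[::-1][:K]               # the only items the user sees

print("items shown to this user:", top_k.tolist())
# How these embeddings are trained -- and what the scores optimize for -- is
# exactly what transparency rules like the DSA target.
```

Even this toy version shows why recommender transparency is hard to regulate: the consequential choices sit in the training objective, not in the ten lines of ranking code.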

A third emerging challenge is the shifting nature of AI deployment. New trends include multi-organizational AI development as well as the proliferation of techniques such as edge and federated machine learning.

The process by which AI systems are developed, sometimes referred to as the AI value chain, is becoming more complex. One notable development is the emergence of large AI models, most commonly large language and large imagery models, being made available over commercial application programming interfaces (APIs) and public cloud services. That cutting-edge models are often available only via remote access raises new concerns about how they are integrated, including through fine-tuning, into other software and web applications. Consider a European AI developer that starts with a large language model available over API from a different company based in the U.S., then fine-tunes that model to analyze cover letters of job applicants. This application would be high-risk under the EU AI Act, and the European developer would have to ensure it meets the relevant regulatory standards. However, some required qualities of the AI system, such as robustness or explainability, may be much more difficult to ensure through remote access to the third-party model, especially if it has been developed in a different country under a different regulatory regime.
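
The sketch below illustrates this split value chain. Everything in it is hypothetical: `UpstreamLLMClient` and its methods are invented stand-ins and do not correspond to any real provider’s API. The point is only that the downstream European developer controls prompts and fine-tuning data but never the underlying model.

```python
# Hypothetical illustration of the split AI value chain described above.
# UpstreamLLMClient and its methods are invented for this sketch; they do
# not correspond to any real provider's API.

class UpstreamLLMClient:
    """Stand-in for a U.S.-hosted large language model behind an API."""

    def fine_tune(self, examples: list[tuple[str, str]]) -> str:
        # The provider trains on the examples and returns an opaque model id.
        # The downstream developer never sees weights, training data, or
        # architecture -- only this handle.
        return "ft-model-0001"

    def classify(self, model_id: str, text: str) -> str:
        # Remote inference: the EU developer cannot directly inspect why this
        # output was produced, complicating robustness and explainability
        # obligations under the EU AI Act.
        return "qualified"  # placeholder response


# A European developer building a high-risk hiring tool on top of the API:
client = UpstreamLLMClient()
model_id = client.fine_tune([("cover letter text...", "qualified"),
                             ("another cover letter...", "not qualified")])
decision = client.classify(model_id, "Dear hiring manager, ...")
print(decision)
```

The regulatory question is who, along this chain, is responsible for the high-risk requirements when only the downstream actor is in scope but only the upstream actor can change the model.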

Edge and federated machine learning techniques pose similar challenges. These approaches enable AI models to be trained across thousands or millions of devices (e.g., smartphones, smart watches, and AR/VR glasses), while still being individualized to each user and without moving personal data off the device. As these AI systems start to touch on more regulated sectors, such as healthcare, there is significant potential for international regulatory conflict.
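
For illustration, here is a minimal sketch of federated averaging (FedAvg), the canonical federated learning algorithm; the linear model and synthetic per-device data are assumptions made for the example, not a description of any deployed system.

```python
# Minimal federated averaging (FedAvg) sketch: each device trains locally on
# its own data, and only model weights -- never raw data -- leave the device.
import numpy as np

rng = np.random.default_rng(1)
TRUE_W = np.array([2.0, -1.0])

def make_device_data(n=200):
    """Synthetic private dataset held by one device."""
    X = rng.normal(size=(n, 2))
    y = X @ TRUE_W + rng.normal(scale=0.1, size=n)
    return X, y

def local_update(w, X, y, lr=0.1, steps=10):
    """A few steps of local gradient descent on one device's private data."""
    for _ in range(steps):
        grad = 2 * X.T @ (X @ w - y) / len(y)
        w = w - lr * grad
    return w

devices = [make_device_data() for _ in range(5)]
w_global = np.zeros(2)

for _ in range(20):
    # Each device refines the global model locally; raw data stays on-device.
    local_weights = [local_update(w_global.copy(), X, y) for X, y in devices]
    # The server aggregates by averaging weights (equal-sized shards here).
    w_global = np.mean(local_weights, axis=0)

print("learned weights:", w_global.round(3), "true weights:", TRUE_W)
```

Because no regulator ever sees the raw training data in such a system, oversight has to focus on the aggregation process and the resulting model, which is precisely where cross-border rules may conflict.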

Policy Recommendations

For both the EU and U.S. governments, a range of domestic and international policy options would aid current and future cooperation and alignment on AI risk management.

The U.S. should prioritize its domestic AI risk management agenda, giving it more focused attention than it has so far received. This includes revisiting the requirements in EO 13859 and mandating that federal agencies develop the required AI regulatory plans, thereby building a much more comprehensive understanding of domestic AI risk management authority. Using these federal agency regulatory plans, the U.S. should formally review the projected consequences and conflicts of emerging global AI risk management approaches, with a special focus on the U.S.-EU relationship.

The federal agency regulatory plans can also inform what changes are necessary to ensure agencies are able to apply pre-existing law to new AI applications. This may require new staffing capacity, administrative subpoena authority, and clarifications or legislative expansions of rulemaking authority to uphold the AI principles espoused in the AIBoR, especially for AI used in impactful socioeconomic decisions.

Likewise, the EU has a number of opportunities to aid future cooperation without weakening its domestic regulatory intentions. One key intervention is to enable more flexibility in the sectoral implementation of the EU AI Act. By enabling more flexibility, EU regulators will be able to better fine-tune the AI Act requirements to the specific types of high-risk AI applications, likely improving the effectiveness of the act. AI rules that can be flexibly tailored to specific applications will better enable future cooperation between the U.S. and the EU, as compared to more homogenous and inflexible rules. In order to do this, the EU will have to carefully manage harmonization so that member state regulators do not implement the high-risk requirements differently; a mechanism for making inclusion decisions (i.e., what specific AI applications are covered) and for adapting the details of high-risk requirements could include both member state regulators and the European Commission.

For online platforms, the absence of a U.S. legal framework for platform governance makes policy recommendations difficult. The U.S. should work towards a meaningful legal framework for online platform oversight. Further, this framework should consider alignment with EU laws, especially the DSA and the DMA, and account for how misalignment might negatively affect markets and the information ecosystem. In the meantime, the EU and U.S. should include recommender systems and network algorithms (key components of online platforms) when implementing the TTC Joint Roadmap on Evaluation and Measurement Tools for Trustworthy AI and Risk Management. The EU should also allow U.S. researchers to collaborate on the studies of very large online platforms that will be enabled by the DSA. If the U.S. does fund the NAIRR as a public resource for large AI model development, it should reciprocally welcome and encourage EU research collaborations.

Although these online platforms and high-risk AI systems demand the most attention, the EU should carefully consider the extraterritorial impact of other aspects of its digital governance, especially those that affect websites and platforms, such as chatbots and new considerations of general-purpose AI. If the EU includes new rules on the function of general-purpose AI, it should be careful to avoid overly broad requirements (such as a general standard of accuracy or robustness) that make little sense for these models and could cause unnecessary splits in the emerging AI value chain marketplace.

Many of the EU’s upcoming efforts will generate significant new information about the function of important AI systems, as well as the efficacy of its novel attempts at AI governance, and the EU should proactively share this information with the U.S. and other partners. This includes opening its AI standards development process to international stakeholders and the public, as well as ensuring that the resulting standards are available free of charge (which is not currently the case). Further, the EU can make public some of the results of its many information-gathering endeavors, including results from pilot programs on AI auditing, such as those from the European Center for Algorithmic Transparency and the new AI sandboxes.

Working together, and by building on the early success of the TTC, the U.S. and EU can deepen their policy collaboration on AI risk management. Most critically, enabling policy exchanges at the sectorally specific regulator-to-regulator level will build capacity for both governments, while paving easier roads to cooperation. Expanding on the collaborative experimentation with PETs, the EU and U.S. can also consider joint investments in responsible AI research and, even more valuable, open-source tools that better enable responsible AI implementation. Lastly, the EU and U.S. should consider jointly developing a plan for encouraging a transatlantic AI assurance ecosystem, taking inspiration from the United Kingdom’s strategy.

In summary:

  • The U.S. should execute on federal agency AI regulatory plans and use these for designing strategic AI governance with an eye towards EU-U.S. alignment.
  • The EU should create more flexibility in the sectoral implementation of the EU AI Act, improving the law and enabling future EU-U.S. cooperation.
  • The U.S. needs to implement a legal framework for online platform governance, but until then, the EU and U.S. should work on shared documentation of recommender systems and network algorithms, as well as perform collaborative research on online platforms.
  • The U.S. and EU should deepen knowledge sharing on a number of levels, including on standards development; AI sandboxes; large public AI research projects and open-source tools; regulator-to-regulator exchanges; and developing an AI assurance ecosystem.

The EU and U.S. are implementing foundational policies of AI risk management—deepening the crucial collaboration between these governments will help ensure these policies become synergistic pillars of global AI governance.

Appendix: Excluded Categories of AI Risk

This paper does not exhaustively cover all areas of AI risk management but rather focuses on those with the most considerable extraterritorial impact. There are therefore significant absences in this analysis that warrant acknowledgement, including rules and processes for the government use of AI, the impact of AI and automation on the labor market, and related issues, such as data protection.

The government use of AI, such as for allocating public benefits and by law enforcement, is the most notable absence. Despite significant policies in the form of the U.S.’s EO 13960 on trustworthy AI in the federal government and the inclusion of government services in the EU’s AI Act (notably for public benefits, border control, and law enforcement), these are primarily domestic issues. The military use of AI is also not included here. The U.S. is advancing significant policies relevant to military AI risks, such as the DOD Directive on Autonomy in Weapon Systems, while in Europe this topic remains under the authority and responsibility of EU member states, rather than EU institutions. Future examinations should consider these policies, especially given the potential impact of government procurement rules on global AI markets.

The impact of AI on labor markets is also a critical issue, with substantive effects on labor displacement, productivity, and rising inequality. However, this topic is not primarily treated as a regulatory issue, and while it warrants extensive consideration, it cannot be adequately addressed here. Similarly, while issues of data privacy are often inextricably linked to AI policies, this issue has been extensively covered in other publications from as far back as 1998 until the present day. Lastly, a range of relevant policies in EU member states and in U.S. states has been excluded from this analysis.


Notes:

i. An appendix expands on the categories of AI risk management that are not discussed in this paper, including AI use by governments and the military, labor market impacts of AI, and data privacy.

ii. These agencies are the Departments of Energy, Health and Human Services, and Veterans Affairs, as well as the Environmental Protection Agency and the U.S. Agency for International Development.

iii. In its five principles, the AIBoR calls for “safe and effective” AI and insists on “notice and explanation” to affected persons, with strong “algorithmic discrimination protections.” Further, the AIBoR says AI must respect data privacy and offer human alternatives or fallbacks that can override AI decisions.

iv. In fact, this occupational series was created under requirements in the Foundations for Evidence-Based Policymaking Act, which Congress passed in January 2019 and which is oriented towards improving government use of data and empirical evidence.

v. While the original AI Act proposed by the European Commission would only ban social scoring by governments, the Council of the EU and the European Parliament are considering including commercial social scoring. While restricting government social scoring may primarily be a signal of opposition to authoritarian use of AI, the application of the restriction to private companies may be more impactful. Although the phrasing is nebulous, it could ban, for instance, the analysis of customers’ social media posts, food delivery orders, or online reviews in order to make decisions about eligibility for business services, such as product returns.

vi. This category of AI models has significant overlap with the terms “foundation models” and “generative AI.”

  • Acknowledgements and disclosures

    Google, Amazon, and Meta (formerly Facebook) are general unrestricted donors to the Brookings Institution. The findings, interpretations, and conclusions posted in this piece are solely those of the authors and not influenced by any donation.

  • Footnotes
    1. “Executive Order 13859 of February 11, 2019, Maintaining American Leadership in Artificial Intelligence,” Federal Register, 84 FR 3967 (February 14, 2019): 3967-3972, https://www.federalregister.gov/documents/2019/02/14/2019-02544/maintaining-american-leadership-in-artificial-intelligence; Office of Management and Budget, Guidance for Regulation of Artificial Intelligence Applications, by Russell T. Vought (Washington, D.C., 2020), https://www.whitehouse.gov/wp-content/uploads/2020/11/M-21-06.pdf
    2. Will Knight, “White House Favors a Light Touch in Regulating AI,” Wired, January 7, 2020, https://www.wired.com/story/white-house-favors-light-touch-regulating-ai/ (accessed January 15, 2023); Alex Engler, “New White House guidance downplays important AI harms,” The Brookings Institution, December 9, 2020, https://www.brookings.edu/blog/techtank/2020/12/08/new-white-house-guidance-downplays-important-ai-harms/
    3. Christie Lawrence, Isaac Cui, and Daniel E. Ho, Implementation Challenges to Three Pillars of America’s AI Strategy (Stanford: Stanford RegLab and Stanford University Center for Human-Centered AI, 2022) https://hai.stanford.edu/sites/default/files/2022-12/HAIRegLab%20White%20Paper%20-%20Implementation%20Challenges%20to%20Three%20Pillars%20of%20America%E2%80%99s%20AI%20Strategy.pdf
    4. Department of Health and Human Services, OMB M-21-06 (Guidance for Regulation of Artificial Intelligence Applications) (Washington D.C., 2021). https://www.hhs.gov/sites/default/files/department-of-health-and-human-services-omb-m-21-06.pdf
    5. The White House, Blueprint for an AI Bill of Rights (Washington, D.C., 2022) https://www.whitehouse.gov/wp-content/uploads/2022/10/Blueprint-for-an-AI-Bill-of-Rights.pdf
    6. Alex Engler, “The AI Bill of Rights makes uneven progress on algorithmic protections,” The Brookings Institution, November 21, 2022. https://www.brookings.edu/2022/11/21/the-ai-bill-of-rights-makes-uneven-progress-on-algorithmic-protections/
    7. The Federal Trade Commission, Aiming for truth, fairness, and equity in your company’s use of AI (Washington D.C., 2021). https://www.ftc.gov/business-guidance/blog/2021/04/aiming-truth-fairness-equity-your-companys-use-ai; The Federal Trade Commission, Keep your AI claims in check (Washington D.C., February 27, 2023) https://www.ftc.gov/business-guidance/blog/2023/02/keep-your-ai-claims-check; Kate Kaye, “The FTC’s new enforcement weapon spells death for algorithms,” Protocol, March 14, 2022. https://www.protocol.com/policy/ftc-algorithm-destroy-data-privacy
    8. The Federal Trade Commission, Commercial Surveillance and Data Security Rulemaking (Washington D.C., August 11, 2022). https://www.ftc.gov/legal-library/browse/federal-register-notices/commercial-surveillance-data-security-rulemaking; Chris Baumohl, Suzanne Bernstein, Alan Butler, John Davisson, Caitriona Fitzgerald, Grant Fergusson, Christopher Frascella, Sara Geoghegan, Calli Schroeder, and Ben Winters, “Disrupting Data Abuse: Protecting Consumers from Commercial Surveillance in the Online Ecosystem,” Electronic Privacy Information Center (November 2022). https://epic.org/ftc-rulemaking-on-commercial-surveillance-data-security/; Lydia X. Z. Brown, Andrew Crawford, Nick Doty, Matt Scherer, Ridhi Shetty, Cody Venzke, Michael Yang, Elizabeth Laird, Eric Null, and George Slover, “CDT Comments to FTC Regarding Prevalent Commercial Surveillance Practices that Harm Consumers” Center for Democracy and Technology. (November 21, 2022) https://cdt.org/wp-content/uploads/2022/11/CDT-Comments-to-FTC-on-ANPR-R111004.pdf
    9. Alex Engler, “The EEOC wants to make AI hiring fairer for people with disabilities” The Brookings Institution, May 26, 2022. https://www.brookings.edu/blog/techtank/2022/05/26/the-eeoc-wants-to-make-ai-hiring-fairer-for-people-with-disabilities/
    10. Consumer Financial Protection Bureau, CFPB Acts to Protect the Public from Black-Box Credit Models Using Complex Algorithms (Washington D.C., 2022). https://www.consumerfinance.gov/about-us/newsroom/cfpb-acts-to-protect-the-public-from-black-box-credit-models-using-complex-algorithms/; Consumer Financial Protection Bureau, Consumer Financial Protection Circular 2022-03 (Washington, D.C., 2022) https://www.consumerfinance.gov/compliance/circulars/circular-2022-03-adverse-action-notification-requirements-in-connection-with-credit-decisions-based-on-complex-algorithms
    11. The White House, FACT SHEET: President Biden Signs Executive Order to Strengthen Racial Equity and Support for Underserved Communities Across the Federal Government. (Washington D.C., February 2023) https://www.whitehouse.gov/briefing-room/statements-releases/2023/02/16/fact-sheet-president-biden-signs-executive-order-to-strengthen-racial-equity-and-support-for-underserved-communities-across-the-federal-government/
    12. The Food and Drug Administration, Proposed Regulatory Framework for Modifications to Artificial Intelligence/Machine Learning (AI/ML)-Based Software as a Medical Device (SaMD) (Washington D.C., 2019) https://www.fda.gov/files/medical%20devices/published/US-FDA-Artificial-Intelligence-and-Machine-Learning-Discussion-Paper.pdf
    13. The Food and Drug Administration, Good Machine Learning Practice for Medical Device Development: Guiding Principles (Washington D.C., 2021) https://www.fda.gov/medical-devices/software-medical-device-samd/good-machine-learning-practice-medical-device-development-guiding-principles; The Food and Drug Administration, Artificial Intelligence and Machine Learning (AI/ML)-Enabled Medical Devices (Washington D.C., 2022) https://www.fda.gov/medical-devices/software-medical-device-samd/artificial-intelligence-and-machine-learning-aiml-enabled-medical-devices; The Food and Drug Administration. Artificial Intelligence/Machine Learning (AI/ML)-Based Software as a Medical Device (SaMD) Action Plan (Washington D.C., 2021) https://www.fda.gov/media/145022/download
    14. Nevin J. Taylor, “Artificial Intelligence and Machine Learning In Consumer Products” Consumer Product Safety Commission, May 19, 2021. https://www.cpsc.gov/s3fs-public/Artificial%20Intelligence%20and%20Machine%20Learning%20In%20Consumer%20Products.pdf
    15. Nevin J. Taylor, “Applied Artificial Intelligence and Machine Learning Test and Evaluation Program for Consumer Products” Consumer Product Safety Commission, August 24, 2022. https://www.cpsc.gov/s3fs-public/20220824AppliedAI_ML_TE_Report.pdf?VersionId=fB_QR1ofs1pAuG8qSwebOw2xSTbxtRUq
    16. Department of Transportation, Automated Vehicles Comprehensive Plan (Washington D.C., 2021) https://www.transportation.gov/sites/dot.gov/files/2021-01/USDOT_AVCP.pdf
    17. National Institute of Standards and Technology, AI Risk Management Framework: Initial Draft (Washington D.C., 2022) https://www.nist.gov/system/files/documents/2022/03/17/AI-RMF-1stdraft.pdf; National Institute of Standards and Technology. AI Risk Management Framework. (Washington D.C., 2023) https://www.nist.gov/itl/ai-risk-management-framework 
    18. The Organization for Economic Cooperation and Development, OECD Framework for the Classification of AI systems (Paris, 2022) https://www.oecd-ilibrary.org/science-and-technology/oecd-framework-for-the-classification-of-ai-systems_cb6d9eca-en
    19. National Institute of Standards and Technology, NIST AI Risk Management Framework Playbook (Washington D.C., 2023) https://pages.nist.gov/AIRMF/
    20. National Institute of Standards and Technology, Roadmap for the NIST Artificial Intelligence Risk Management Framework (AI RMF 1.0) (Washington D.C., 2023) https://www.nist.gov/itl/ai-risk-management-framework/roadmap-nist-artificial-intelligence-risk-management-framework-ai
    21. Tenable, “NIST Cybersecurity Framework Adoption Linked to Higher Security Confidence According to New Research from Tenable Network Security” (March 29, 2016) https://www.tenable.com/press-releases/nist-cybersecurity-framework-adoption-linked-to-higher-security-confidence-according
    22. National Institute of Standards and Technology, Face Recognition Vendor Test (FRVT) (Washington D.C., November 30, 2020). https://www.nist.gov/programs-projects/face-recognition-vendor-test-frvt
    23. Patrick Grother, Mei Ngan, and Kayee Hanaoka, “Face Recognition Vendor Test (FRVT) Part 3: Demographic Effects,” National Institute of Standards and Technology (December 2019). https://nvlpubs.nist.gov/nistpubs/ir/2019/NIST.IR.8280.pdf
    24. The National Artificial Intelligence Initiative Office, The National AI Advisory Committee (NAIAC) (Washington D.C., 2022) https://www.ai.gov/naiac/#SUBCOMMITTEE_ON_AI_AND_LAW_ENFORCEMENT_NAIAC-LE
    25. Dan Morgan, “Update on Government-wide Data Scientist Hiring Pilot” The Federal Chief Data Officers Council. June 11, 2021. https://www.cdo.gov/news/data-scientist-hiring-pilot/; Dave Nyczepir, “State Department launching new assessment-based recruitment process for data scientists,” Fedscoop, March 25, 2022. https://fedscoop.com/state-department-smeqa-process/
    26. Office of Personnel Management, Position Classification Flysheet for Data Science Series, 1560. (Washington D.C., 2022) https://www.opm.gov/policy-data-oversight/classification-qualifications/classifying-general-schedule-positions/standards/1500/gs1560.pdf
    27. National Science and Technology Council Networking and Information Technology Research and Development Subcommittee. National AI Research and Development Strategic Plan (Washington D.C., 2016) https://www.nitrd.gov/pubs/national_ai_rd_strategic_plan.pdf
    28. The National Science Foundation, NSF-led National AI Research Institutes. https://www.nsf.gov/news/ai/AI_map_interactive.pdf; NSF AI Institute for Research on Trustworthy AI in Weather, Climate, and Coastal Oceanography. “Focus 1: Foundational Research in Trustworthy Artificial Intelligence / Machine Learning”
    29. U.S. Congress, House, National Artificial Intelligence Initiative Act of 2020, H.R.6216, 116th Congress. https://www.congress.gov/bill/116th-congress/house-bill/6216; The Department of Energy. Artificial Intelligence and Machine Learning. (Washington, D.C.) https://www.energy.gov/science/artificial-intelligence-and-machine-learning
    30. The National Artificial Intelligence Initiative Office, Strengthening and Democratizing the U.S. Artificial Intelligence Innovation Ecosystem: An Implementation Plan for a National Artificial Intelligence Research Resource (Washington D.C., 2023) https://www.ai.gov/wp-content/uploads/2023/01/NAIRR-TF-Final-Report-2023.pdf
    31. California Assembly Member Bauer-Kahan. An Act relating to artificial intelligence. (Sacramento, January 30, 2023). https://leginfo.legislature.ca.gov/faces/billTextClient.xhtml?bill_id=202320240AB331; Alison Cross, “CT government AI use is extensive, raising equity and privacy concerns. Here’s what a proposed bill would do.” The Hartford Courant, March 4, 2023. https://www.courant.com/2023/03/04/ct-governments-ai-use-is-already-extensive-raising-equity-and-privacy-concerns-a-proposed-bill-would-add-oversight/; Vermont General Assembly. An act relating to restricting electronic monitoring of employees and employment-related automated decision systems. (2023). https://legislature.vermont.gov/bill/status/2024/H.114
    32. Sorelle Friedler, Suresh Venkatasubramanian, and Alex Engler, “How California and other states are tackling AI legislation.” The Brookings Institution, March 22, 2023. https://www.brookings.edu/blog/techtank/2023/03/22/how-california-and-other-states-are-tackling-ai-legislation/; Cam Kerry, “Will California be the death of national privacy legislation?” The Brookings Institution, November 18, 2022. https://www.brookings.edu/blog/techtank/2022/11/18/will-california-be-the-death-of-national-privacy-legislation/
    33. The European Commission, Can I be subject to automated individual decision-making, including profiling? https://commission.europa.eu/law/law-topic/data-protection/reform/rights-citizens/my-rights/can-i-be-subject-automated-individual-decision-making-including-profiling_en
    34. Natasha Lomas, “Uber hit with default ‘robo-firing’ ruling after another EU labor rights GDPR challenge,” TechCrunch, April 14, 2021. https://techcrunch.com/2021/04/14/uber-hit-with-default-robo-firing-ruling-after-another-eu-labor-rights-gdpr-challenge/
    35. Andrew Selbst and Julia Powles, “Meaningful information and the right to explanation.” International Data Privacy Law. November 2017. https://academic.oup.com/idpl/article/7/4/233/4762325
    36. Jacob Dexe, Ulrik Franke, Kasia Söderlund, Niels van Berkel, Rikke Hagensby Jensen, Nea Lepinkäinen and Juho Vaiste, “Explaining automated decision-making: a multinational study of the GDPR right to meaningful information” The Geneva Papers on Risk and Insurance – Issues and Practice. May 3, 2022. https://link.springer.com/article/10.1057/s41288-022-00271-9
    37. Steve Wood, “Data Protection – Regulatory action and recent case law from the EU and UK courts: the emerging direction for GDPR regulation” JDSupra, March 9, 2022. https://www.jdsupra.com/legalnews/data-subject-rights-under-gdpr-4731566/
    38. Lilian Edwards, “Expert explainer: The EU AI Act proposal” The Ada Lovelace Institute. https://www.adalovelaceinstitute.org/resource/eu-ai-act-explainer/ 
    39. Joshua P. Meltzer and Aaron Tielemans, “The European Union AI Act: Next steps and issues for building international cooperation in AI” The Brookings Institution (June 1, 2022) https://www.brookings.edu/research/the-european-union-ai-act-next-steps-and-issues-for-building-international-cooperation-in-ai/
    40. Michael Veale and Frederik Zuiderveen Borgesius, “Demystifying the Draft EU Artificial Intelligence Act” Computer Law Review International (November 26, 2021) https://osf.io/preprints/socarxiv/38p5f
    41. European Union Aviation Safety Agency. Artificial Intelligence Roadmap: A human-centric approach to AI in aviation (Brussels, 2020) https://www.easa.europa.eu/en/downloads/109668/en; European Union Aviation Safety Agency. EASA Concept Paper: First usable guidance for Level 1 machine learning applications (Brussels, 2021)
    42. Hadrien Pouget, “The EU’s AI Act Is Barreling Toward AI Standards That Do Not Exist” Lawfare, January 12, 2023. https://www.lawfareblog.com/eus-ai-act-barreling-toward-ai-standards-do-not-exist
    43. European Commission. Draft standardisation request to the European Standardisation Organisations in support of safe and trustworthy artificial intelligence (Brussels 2022) https://ec.europa.eu/docsroom/documents/52376
    44. European Commission, AI Watch: Artificial Intelligence Standardisation Landscape Update (Brussels, 2023). https://publications.jrc.ec.europa.eu/repository/handle/JRC131155
    45. Andrea Renda and Alex Engler, “Reconciling the AI Value Chain with the EU’s Artificial Intelligence Act” The Center for European Policy Studies. https://www.ceps.eu/ceps-publications/reconciling-the-ai-value-chain-with-the-eus-artificial-intelligence-act/
    46. The European Commission, Disinformation: Commission welcomes the new stronger and more comprehensive Code of Practice on disinformation. (Brussels, June 2022). https://ec.europa.eu/commission/presscorner/detail/en/IP_22_3664
    47. Christophe Carugati, “How to implement the self-preferencing ban in the European Union’s Digital Markets Act” Bruegel, December 2, 2022. https://www.bruegel.org/policy-brief/how-implement-self-preferencing-ban-european-unions-digital-markets-act
    48. National Institute of Standards and Technology and the European Commission, Crosswalk: An illustration of how NIST AI RMF trustworthiness characteristics relate to the OECD Recommendation on AI, Proposed EU AI Act, Executive Order 13960, and Blueprint for an AI Bill of Rights. (Washington D.C., 2023) https://www.nist.gov/system/files/documents/2023/01/26/crosswalk_AI_RMF_1_0_OECD_EO_AIA_BoR.pdf
    49. Anu Bradford, The Brussels Effect: How the European Union Rules the World, Oxford Academic, December 19, 2019. https://academic.oup.com/book/36491
    50. Alex Engler, “The EU AI Act will have global impact, but a limited Brussels Effect,” The Brookings Institution. June 8, 2022. https://www.brookings.edu/research/the-eu-ai-act-will-have-global-impact-but-a-limited-brussels-effect/
    51. Luca Bertuzzi, “AI Act: EU Parliament’s discussions heat up over facial recognition, scope” Euractiv, October 6, 2022.  https://www.euractiv.com/section/digital/news/ai-act-eu-parliaments-discussions-heat-up-over-facial-recognition-scope/
    52. Dan Hamilton, “Getting to Yes: Making the U.S.-EU Trade and Technology Council Effective,” Johns Hopkins University Transatlantic Leadership Network. https://www.transatlantic.org/wp-content/uploads/2022/03/TTC-summary-brief-final-March-6-2022.pdf
    53. The White House, U.S.-EU Trade and Technology Council Inaugural Joint Statement (Washington D.C., 2021) https://www.whitehouse.gov/briefing-room/statements-releases/2021/09/29/u-s-eu-trade-and-technology-council-inaugural-joint-statement/
    54. National Institute of Standards and Technology and the European Commission, TTC Joint Roadmap on Evaluation and Measurement Tools for Trustworthy AI and Risk Management. (Washington D.C., 2022) https://www.nist.gov/system/files/documents/2022/12/04/Joint_TTC_Roadmap_Dec2022_Final.pdf
    55. The Organization for Economic Cooperation and Development. OECD-NIST Catalogue of AI Tools & Metrics (Paris, 2023) https://oecd.ai/en/catalogue/faq
    56. European Commission. AI Standardisation Landscape Update (Brussels, 2023) https://publications.jrc.ec.europa.eu/repository/handle/JRC131155
    57. Andrew Foote, Ashwin Machanavajjhala, and Kevin McKinney, “Releasing Earnings Distributions using Differential Privacy: Disclosure Avoidance System For Post Secondary Employment Outcomes (PSEO)” Working Papers 19-13, Center for Economic Studies, U.S. Census Bureau. https://ideas.repec.org/p/cen/wpaper/19-13.html; Alexander Ziller, Dmitrii Usynin, Rickmer Braren, Marcus Makowski, Daniel Rueckert and Georgios Kaissis, “Medical imaging deep learning with differential privacy,” Nature Scientific Reports (June 29, 2021) https://www.nature.com/articles/s41598-021-93030-0; Google. “See how your community moved differently due to COVID-19” (October 17, 2022) https://www.google.com/covid19/mobility/; Solomon Messing, Christina DeGregorio, Bennett Hillenbrand, Gary King, Saurav Mahanti, Zagreb Mukerjee, Chaya Nayak, Nate Persily, Bogdan State, and Arjun Wilkins, “Facebook Privacy-Protected Full URLs Data Set” Harvard Dataverse (2020) https://dataverse.harvard.edu/dataset.xhtml?persistentId=doi:10.7910/DVN/TDOAPG
    58. The White House, FACT SHEET: U.S.-EU Trade and Technology Council Advances Concrete Action on Transatlantic Cooperation (Washington D.C., 2022) https://www.whitehouse.gov/briefing-room/statements-releases/2022/12/05/fact-sheet-u-s-eu-trade-and-technology-council-advances-concrete-action-on-transatlantic-cooperation/
    59. The White House, Statement by National Security Advisor Jake Sullivan on the New U.S.-EU Artificial Intelligence Collaboration (Washington D.C., 2023) https://www.whitehouse.gov/briefing-room/statements-releases/2023/01/27/statement-by-national-security-advisor-jake-sullivan-on-the-new-u-s-eu-artificial-intelligence-collaboration/
    60. European Commission and the White House Council of Economic Advisors, The Impact of Artificial Intelligence on the Future of Workforces in the European Union and the United States of America (Washington D.C., 2023) https://www.whitehouse.gov/wp-content/uploads/2022/12/TTC-EC-CEA-AI-Report-12052022-1.pdf
    61. Lee Reiners, “Regulation of Robo-advisory Services” FinTech Law and Regulation (September 2, 2019) https://www.elgaronline.com/display/edcoll/9781788979016/29_chapter16.xhtml
    62. Charlotte Siegmann and Markus Anderljung, “The Brussels Effect and Artificial Intelligence” (August 16, 2022) https://www.governance.ai/research-paper/brussels-effect-ai
    63. Imran Uddin, Ali Shariq Imran, Khan Muhammad, Nosheen Fayyaz, and Muhammad Sajjad, “A Systematic Mapping Review on MOOC Recommender Systems” IEEE (2021) https://ieeexplore.ieee.org/document/9501040; Qi Guo, “The AI Behind LinkedIn Recruiter search and recommendation systems” LinkedIn Engineering Blog (April 22, 2019) https://engineering.linkedin.com/blog/2019/04/ai-behind-linkedin-recruiter-search-and-recommendation-systems; Halah Touryalai, “Forget $10 Trades, Meet Robinhood: New Brokerage Targets Millennials With Little Cash,” Forbes, February 26, 2014. https://www.forbes.com/sites/halahtouryalai/2014/02/26/forget-10-trades-meet-robinhood-new-brokerage-targets-millennials-with-little-cash/?sh=4420d7b87f48; Jodi Kantor and Arya Sundaram, “The Rise of the Worker Productivity Score,” New York Times, August 14, 2022 https://www.nytimes.com/interactive/2022/08/14/business/worker-productivity-tracking.html
    64. The White House, FACT SHEET: U.S.-EU Trade and Technology Council Advances Concrete Action on Transatlantic Cooperation (Washington D.C., 2022) https://www.whitehouse.gov/briefing-room/statements-releases/2022/12/05/fact-sheet-u-s-eu-trade-and-technology-council-advances-concrete-action-on-transatlantic-cooperation/
    65. Charlotte Stanton, Vivien Lung, Nancy Zhang, Minori Ito, Steve Weber, and Katherine Charlet, “What the Machine Learning Value Chain Means for Geopolitics,” Carnegie Endowment for International Peace, August 5, 2019. https://carnegieendowment.org/2019/08/05/what-machine-learning-value-chain-means-for-geopolitics-pub-79631
    66. Matthias Paulik, Matt Seigel, Henry Mason, Dominic Telaar, Joris Kluivers, Rogier van Dalen, Chi Wai Lau, Luke Carlson, Filip Granqvist, Chris Vandevelde, Sudeep Agarwal, Julien Freudiger, Andrew Byde, Abhishek Bhowmick, Gaurav Kapoor, Si Beaumont, Áine Cahill, Dominic Hughes, Omid Javidbakht, Fei Dong, Rehan Rishi and Stanley Hung, “Federated Evaluation and Tuning for On-Device Personalization: System Design & Applications,” Apple, February 2022. https://machinelearning.apple.com/research/federated-personalization
    67. Andrea Renda and Alex Engler, “What’s in a Name: Getting the definition of Artificial Intelligence right in the EU’s AI Act” Center for European Policy Studies, February 2023. https://www.ceps.eu/ceps-publications/whats-in-a-name/
    68. Alex Engler, “A Bold Transatlantic Plan to Open Corporate Databases,” Center for European Policy Analysis, July 8, 2021. https://cepa.org/article/a-bold-transatlantic-plan-to-open-corporate-databases/
    69. Maximilian Gahntz and Claire Pershan, ‘How the EU Can Take on “general-purpose AI” in the AI Act,’ Mozilla, November 9, 2022 https://foundation.mozilla.org/en/blog/how-the-eu-can-take-on-general-purpose-ai-in-the-ai-act/
    70. Alex Engler, “Early thoughts on regulating generative AI like ChatGPT,” The Brookings Institution. February 21, 2023. https://www.brookings.edu/blog/techtank/2023/02/21/early-thoughts-on-regulating-generative-ai-like-chatgpt/
    71. CENELEC, The General Court of the EU affirms copyright of harmonized standards. (Brussels, 2021) https://www.cencenelec.eu/news-and-events/news/2021/briefnews/2021-07-29-general-court-of-eu-affirms-copyright-of-harmonized-standards/
    72. European Centre for Algorithmic Transparency, Towards a safer, more predictable and trusted online environment (Brussels, 2023) https://algorithmic-transparency.ec.europa.eu/index_en; European Commission. Launch event for the Spanish Regulatory Sandbox on Artificial Intelligence (Brussels, 2022) https://digital-strategy.ec.europa.eu/en/events/launch-event-spanish-regulatory-sandbox-artificial-intelligence
    73. Centre for Data Ethics and Innovation, “The roadmap to an effective AI assurance ecosystem.” (London, December 8, 2021)  https://www.gov.uk/government/publications/the-roadmap-to-an-effective-ai-assurance-ecosystem
    74. “Executive Order 13960 of December 3, 2020, Promoting the Use of Trustworthy Artificial Intelligence in the Federal Government,” Federal Register, 85 FR 78939 (December 8, 2020): 78939-78943, https://www.federalregister.gov/documents/2020/12/08/2020-27065/promoting-the-use-of-trustworthy-artificial-intelligence-in-the-federal-government
    75. The Department of Defense, Autonomy in Weapon Systems. (Washington D.C., 2023). https://media.defense.gov/2023/Jan/25/2003149928/-1/-1/0/DOD-DIRECTIVE-3000.09-AUTONOMY-IN-WEAPON-SYSTEMS.PDF
    76. World Economic Forum, “AI Procurement in a Box: AI Government Procurement Guidelines” (2020) https://www3.weforum.org/docs/WEF_AI_Procurement_in_a_Box_AI_Government_Procurement_Guidelines_2020.pdf
    77. Carl Benedikt Frey, The Technology Trap: Capital, Labor, and Power in the Age of Automation (Princeton: Princeton University Press, 2019); Daron Acemoglu and Pascual Restrepo, “Artificial Intelligence, Automation, and Work,” University of Chicago Press https://www.nber.org/system/files/chapters/c14027/c14027.pdf; Daron Acemoglu and Pascual Restrepo, “Tasks, Automation, and the Rise in US Wage Inequality,” Econometrica (October 14, 2022) https://www.nber.org/papers/w28920
    78. Peter P. Swire and Robert E. Litan, “Avoiding a Showdown Over EU Privacy Laws,” The Brookings Institution (February 1, 1998) https://www.brookings.edu/research/avoiding-a-showdown-over-eu-privacy-laws/; The Congressional Research Service, The EU-U.S. Data Privacy Framework: Background, Implementation, and Next Steps. (Washington D.C., 2022) https://crsreports.congress.gov/product/pdf/LSB/LSB10846