Artificial intelligence caught the attention of a few governments a decade ago. It has become a preoccupation for many since the watershed 2022 release of ChatGPT.1 In turn, that development set off a tsunami of policy initiatives across many national governments, most multilateral organizations, and diverse evolving groups and ad-hoc coalitions. These efforts seek both to realize the opportunities for AI to expand the frontiers of science and human capabilities and contribute to productivity and creativity, and to identify and mitigate risks that AI presents to humans and society.
The multiplicity of efforts and paths can seem confusing, incoherent, or conflicting. At the international level, AI governance began with frameworks and principles like the AI ethics recommendations adopted by the OECD and UNESCO,2 and has since expanded to include strategy documents from many national governments and a continental strategy from the African Union;3 additional “soft law” efforts and voluntary codes of conduct by national governments and multilateral organizations like the G7 and ASEAN;4 legislation from the European Union regulating “high-risk” AI applications and general purpose systems;5 a network of “safety institutes” springing from the Bletchley conference convened by the U.K. government;6 reports on AI science and safety by international expert groups assembled by the UN Secretariat and the U.K. government;7 wider attention to the work of international standards development organizations on AI;8 and, in 2024, a binding framework convention from the Council of Europe signed by the 46 member states and 36 others, and a Global Digital Compact (GDC) adopted by the 2024 UN General Assembly that initiates a UN scientific panel and exploration of a UN role in AI governance and funding capacity-building for development.9
This headlong rush has prompted some voices to urge some kind of global structure for AI governance, in effect an international traffic cop to bring order and consistency to these diverse efforts and international groupings. Such a centralized approach is most prominently exemplified by UN Secretary General António Guterres and by OpenAI CEO Sam Altman and his co-founders Greg Brockman and Ilya Sutskever, all of whom have called for regulation of foundational AI by a body similar to the International Atomic Energy Agency (IAEA).10 The website for the February 2025 AI Action Summit in Paris describes current global governance of AI as “piecemeal” with “no unified governance …,” though it concludes “one single governance initiative is not the answer.”11
We see efforts to consolidate international AI governance as premature and ill-suited to the immense, complex, and novel challenges of governing advanced AI, and the current diverse and decentralized efforts as beneficial and the best fit for this complex and rapidly developing technology.
Exploring the vast terra incognita of AI, realizing its opportunities, and managing its risks require governance that can adapt and respond rapidly to AI risks as they emerge, develop deep understanding of the technology and its implications, and mobilize diverse resources and initiatives to address the growing global demand for access to AI. No one government or body will have the capacity to take on these challenges without building multiple coalitions and working closely with experts and institutions in industry, philanthropy, civil society, and the academy.
A distributed network of networks can more effectively address the challenges and opportunities of AI governance than a centralized system. Like the architecture of the interconnected information technology systems on which AI depends, such a decentralized system can bring to bear redundancy, resiliency, and diversity by channeling the functions of AI governance toward the most timely and effective pathways in iterative and diversified processes, providing agility against setbacks or failures at any single point. These multiple centers of effort can harness the benefit of network effects and parallel processing.
We explore this model of distributed and iterative AI governance below. First, we describe the unique and multifaceted challenges that evolving foundational AI presents for governance. Then, we show how global infrastructure and investment align with AI development and deployment, shaping networks and hubs of AI activity. Third, we examine the proliferating landscape of AI governance initiatives and key functions that these perform in addressing opportunities and risks and understanding AI. We conclude by looking at how distributed governance can adapt to address future needs for AI governance.
The governance challenges of foundational AI
Today’s advanced AI models are unlike previous technologies and systems in their breadth, complexity, and versatility. First and foremost, generative AI is a general-purpose tool with a wide range of applications that are still being discovered. The scale of the compute and the layers of machine learning involved make it difficult to comprehend the full scope of AI model capabilities and applications. Even though large generative AI models have undergone extensive testing and alignment to elicit capabilities and develop guardrails, the expert panel convened by the U.K. wrote in its final International AI Safety Report that, “Developers still understand little about how their general-purpose AI models operate. This lack of understanding makes it more difficult both to predict behavioural issues and to explain and resolve known issues once they are observed.”12 In one notable example, the coding ability of large language models came as a surprise to their developers.13
The rapid pace of development adds to these uncertainties. In the two-plus years since the first wave of generative AI was released to the public, the models have grown in scale and incorporated new learning techniques. Figure 1 below shows the growth in training compute among major generative models in recent years, enabling faster computation on greater volumes of training data.14

The models present moving targets because they are developing rapidly through breakthroughs in machine learning, continuous learning, and additional data.15 Not only are their capabilities unknown and expanding as they are released, but these can grow and change after release through reinforcement learning and additional training by third parties using tailored datasets.16 These uncertainties make it difficult to pin down where in the value chain of development and deployment risk can be introduced or identified. Due to these compound unknowns, understanding the use cases and misuses of generative AI models will require extensive and continuous monitoring by developers, deployers, policymakers, and experts from civil society and academia. This will require governance that is deeply informed, agile, and adaptive.
The scale and complexity of frontier AI means that, at least through the end of 2024, only a small number of nations and large tech companies have had the financial capacity and data to develop foundational AI models.17 From an AI governance perspective, the complexity adds to the challenge of understanding the capabilities and risks of newly introduced models and measuring their performance and assessing compliance with applicable rules, standards, and voluntary codes of conduct. It necessitates close and continuous collaboration with industry and experts for AI governance.
Despite the uncertainties of frontier AI, a number of applications and risks are evident. Most prominently, these include bias and discrimination, false or malicious content, augmented cyberattacks, and magnified privacy intrusions, as well as collateral effects of AI on energy consumption, competition, intellectual property, and labor markets, along with broader systemic effects. Responses to these will come in significant part at the national level and will depend on the use case, the effects, and the applicability of existing mechanisms in laws or otherwise. The responses will also benefit from international cooperation based on shared interests to achieve critical mass and align approaches.
Another known source of governance challenges is AI’s dual-use capability, which pairs beneficial applications with potentially destructive ones. To take one prominent example, AI-driven protein folding, which was awarded a Nobel Prize and accelerates drug discovery, can also be used to develop bioweapons.18 These dual-use characteristics distinguish AI both from general-purpose electricity, which did not create immediate national security risks, and from nuclear energy, which was not a broad general-purpose technology. The goal of maximizing economically and socially beneficial deployment of AI while controlling AI uses with national security implications brings national security considerations into the governance discussion, further complicating the approach to AI governance.
The national security dimensions of generative AI, together with its breadth and potential, also make geopolitics a significant factor in the global governance landscape. The U.S., China, and other powers see AI as strategically important to both competitiveness and national security and therefore aim for leadership in its development, making AI central to competition between China and the G7 countries and their allies, and to the geopolitical aspirations of middle powers like India and Brazil. As a result, like-minded economic and national security alliances are positioned to achieve the deepest cooperation on AI governance, while AI governance mechanisms that include a broader range of countries will be constrained by narrower common ground. While the U.S. and China can engage on AI governance, the opportunities for deep cooperation in bodies such as the UN will be limited. Each supported the other’s parallel UN General Assembly (UNGA) resolutions, for example, but their bilateral cooperation on AI has been tentative and narrow, focused most concretely on avoiding unintended conflict from autonomous weapons systems.19
The geopolitics surrounding AI also cut across the issues of equity and access for low-to-moderate income (LMI) countries. The growing importance of access to AI for development underpins the GDC and complements the UN Sustainable Development Goals.20 For leading AI developers in the private sector, LMI countries present commercial opportunities; for the governments of leading AI countries, they also present geopolitical opportunities, as reflected by those U.S. and China UNGA resolutions. For the most part, LMI countries are eager to participate in the promise of emerging AI, which injects competition into how the U.S., China, and other geopolitical players support LMI development and shape approaches to AI governance.
The breadth of issues presented by AI will stretch the time needed to develop the consensus and coalitions required for more coherent international governance. While certain issues will be more definable and tractable, others will need time to ripen, and a diverse range of geopolitical, economic, and developmental interests among governments will shape choices of forums and partners. Juggling the multiple and rapidly evolving attributes of general-purpose AI technology will require varying coalitions and timetables. Developing such coalitions takes working outward from a central core to build critical mass in concentric circles; for example, the critical path to guardrails for military use of AI runs through the U.S. and China because, without agreement between these two, no broader consensus is possible. All this underscores that international governance of AI will need to be multifaceted, multilayered, and multistakeholder.
The architecture of global AI
Figure 2 shows a schematic map of the global internet backbone—a network of networks, distributed across the globe from numerous hubs and interconnected by cables.21 Each circle represents a hub where traffic intersects and is routed to and from the cables, and the size and width of the circles and arcs represent the bandwidth needed to transmit the volume of traffic and content to other locations or regions.

We see this diagram as both a metaphor and a representation of the state of global AI development and governance. AI development rides on the networks and hubs of the internet. These networks enable cloud services, compute power, and the movement of training data and code that are the engines of AI research and development. The geography of these networks also shapes the map for AI governance and the geopolitical lines that affect AI development.
The concentrations of networks and nodes correlate with centers of AI R&D; where these are concentrated, so are many of the leading developers of AI models. In turn, this maps onto the various governments and groupings that have been early movers in domestic AI governance, as well as leaders building cooperation in AI governance at the international level.
Another significant concentration of necessary inputs for AI that also shapes the landscape of AI governance is capital being invested into AI companies. Figure 3 shows the relative amount of venture investment in 2024 in specific AI companies by the country or region in which the companies are based.22 It shows that the U.S. (the red block) is by far the largest source of venture investment, followed by Asia (yellow) and Europe (orange).

This pattern is not limited to venture investment. The Center for Security and Emerging Technology’s Country Activity Tracker on AI estimates total U.S. investment at $763 billion, compared to $97 billion for China and $46 billion for France and Germany combined.23
The data on AI investment and the companies involved underscore the initial U.S. lead in developing models and use cases for generative AI. The investment data also shows that even though Europe is a center of AI R&D, with infrastructure and knowledge reflected in the world’s densest concentration of internet hubs, these hubs are not sufficiently interconnected, and the EU is not generating the startups and AI advancement that draw investment. The figures also reflect the global digital divide: The network infrastructure in Africa (the light blue at the lower right in Figure 2) is disproportionately small in relation to the continent’s area and population, and venture investment (the barely noticeable blue square at the lower right of Figure 3) is smaller still, making starkly clear how steep the continent’s path toward AI diffusion remains.
The infrastructure and the investment that support AI development and deployment shape leadership in the technology, and therefore geopolitical interests. As these shift—for example, if the ability to build successfully on open models, as China has done with DeepSeek R1, changes the landscape24—then the map of AI governance will also change. Because it is difficult to forecast AI developments and how they will reshape the map of emerging AI, it is vital that governance be agile enough to react to shifting enterprises, capabilities, and risks. AI development will remain dynamic, so governance must as well.
A networked approach to AI governance
The figures above provide a useful conceptual template for the structures of international AI governance. First, as the next section discusses in greater detail, the existing landscape of AI governance is expanding and evolving through an assortment of networks that reflect the channels and concentrations shown in the figures. Second, a network of networks offers advantages in addressing the multifaceted challenges of AI governance. Key features of information networks like the internet are diversity, redundancy, and resiliency, which complement one another. The inherent diversity and fluidity of the internet’s architecture—a virtually unlimited variety of nodes, hubs, and links—enables information to move by the fastest path. This abundance of pathways makes redundancy a feature. In turn, this diversity and redundancy provide resiliency against delay or failure at particular points.
Another feature of networks is the phenomenon termed “network effects.” Metcalfe’s Law characterizes this as a function of the square of the number of connection points.25 Hence, a multiplicity of interconnected networks speeds the spread of information, potentially accelerating the development of AI and expanding the information available to monitor and assess AI risk.
The scale and complexity of foundational AI models, the variety and uncertainties of the issues involved, and the differences in interests and advancement among the actors do not lend themselves to systematically ordered resolution. Instead, the shape of AI governance needs to be adaptable to these variations and to the existing landscape. A variety of international forums with adaptable participation create the hubs and interconnections of an AI governance system that can iterate and adapt to the diverse range of AI governance needs. Some efforts may duplicate others, but the resulting diversity and redundancy introduce natural experiments in AI governance, with some efforts succeeding beyond expectations, others surviving, and some failing.
The governance of the internet infrastructure shown in Figure 2 is instructive for AI governance. There is no central governing body for the internet; instead, the systems, standards, and protocols that enable traffic to move across the networks and nodes are established by a loose aggregation of multistakeholder organizations with limited government involvement. One of these organizations, the Internet Corporation for Assigned Names and Numbers (ICANN), was originally set up to manage the system of internet domain names and numbers as a nonprofit corporation, with a board of directors appointed by a U.S. agency but largely left to its own devices.26 After the U.S. decided to sever any connection ICANN had with the government entirely, a new governance scheme was developed that made the board accountable to “the stakeholders of the internet” (a process in which one of the authors—Kerry—participated first as a government official and later as a lawyer at the firm that advised on the new governance plan).27 Governance of AI involves a much broader set of public policy issues than does the technical interoperability of digital communications systems and thus requires greater participation of policymakers and a wider cross-section of stakeholders. Nevertheless, AI governance also involves a high level of technical complexity, and the internet experience provides both a demonstration that self-organizing stakeholder governance can work in an enduring way at the international level, and a caution insofar as network effects have magnified the spread of harmful information and the vulnerabilities of internet engineering have required retrofitting for security and privacy.
It may be fitting that networks should present a model for governing AI. After all, it was conceiving of machine learning systems as neural networks that provided the key breakthrough for generative AI. John Hopfield’s Nobel Prize-winning paper summarizes the operation of these networks as “the spontaneous emergence of new computational capabilities from the collective behavior of large numbers of simple processing elements,” or the “neurons,” and “[t]heir asynchronous parallel processing capability would provide rapid solutions to some classes of computational problems.”28 Perhaps the collective behavior of numerous governance processes can arrive in parallel at solutions to the governance problems of AI itself. The next section looks at the main centers of energy emerging in the international landscape, the governance functions they perform, and the ways they reflect network architecture.
The overarching aim of AI governance is to realize the benefits of AI while exploring and managing the risks. This dual goal provides a frame for nearly every statement on the subject at national and international levels regardless of variations in national and regional interests, cultures, and capacity. These also share a general premise that the aims of AI governance need international cooperation because, even more than previous digital technologies, the risks, development, and applications of AI operate on a global scale.
One result is that AI development and policy have prompted a high degree of international cooperation. National governments have been quick to embrace international engagement, recognizing that, with AI transcending national boundaries in important respects, domestic AI governance alone is insufficient to maximize the opportunities and address risks.29 For example, aligning policies in common and interoperable approaches to identifying and mitigating AI risks will help AI companies build responsible AI for a global market and strengthen national efforts to manage those risks. Expanding access to AI among LMIs cannot be accomplished without effective global cooperation.
For many in the LMI countries, much of the work around AI governance is seen as a subset of a broader AI-for-development agenda that uses AI as a tool for meeting development needs. This agenda is still being pieced together not only at the UN, but also among the multilateral and regional development banks, government aid programs, philanthropies, and private sector actors. All of these have necessary roles to play in addressing local development needs and financing AI development that meets them. While the UN can play a leadership role in promoting AI for development as a global issue, mobilizing political attention, and perhaps funding, the work of developing AI governance mechanisms, expanding access to AI infrastructure, building the relevant applications, and funding them will rest primarily with these more distributed actors.
In the space of a decade, many international efforts around AI governance have sprung up.30 These span multilateral organizations and groupings, other international organizations and civil society organizations, companies, academia, and prominent individuals. Even the Vatican has weighed in on ethical, transparent, and responsible AI.31 Initially, governance initiatives focused on broad principles and studies, especially around AI ethics. As AI advancement and public awareness have picked up, however, these efforts have moved from broad aspirations toward specific use cases and rules (whether legislation, regulations and other administrative actions, or soft law), standards, measures, tools, and processes with distinct functions. This section looks at key examples of this progression, the functions involved, and the propagation of ideas across various channels.

The G7 as incubator
The first set of broad-based international AI ethics principles came from a standards development organization: the IEEE’s Ethically Aligned Design in 2015.32 The earliest intergovernmental mover was the G7. It took up AI in 2016, even before most national governments, when Japan proposed to G7 digital ministers a study of “networked AI.”33 In turn, this project was tasked to the OECD and developed into OECD recommendations on AI ethics, adopted in 2019, with uptake by OECD members and others totaling 44 countries.34 The OECD principles became a basis for a policy declaration by the G20 and have influenced other principles and policies.35
Under Canadian and French presidencies in 2017 and 2018, the G7 initiated the Global Partnership on AI (GPAI), which launched with 15 members in 2020.36 GPAI was initially conceived as an “international panel on AI” along the lines of the Intergovernmental Panel on Climate Change (IPCC); it brought together experts and government participants in working groups to implement the OECD principles, growing to 29 members. The OECD and GPAI then pooled their overlapping membership and expert networks in 2024 by entering an “integrated partnership,” with 44 participating governments from six continents.37
The G7 was the venue for a further initiative in the months following the release of ChatGPT and other large generative models, again under a Japanese presidency. Japan launched the “Hiroshima AI Process” on generative AI,38 and the U.S.-EU Trade and Technology Council agreed to develop a proposed code of conduct for such models.39 In turn, the White House obtained a set of voluntary commitments from major developers of foundational models,40 which became the basis for the G7 Hiroshima Principles and code of conduct adopted in late 2023.41 This outcome spawned a “Hiroshima Process Friends Group” in support of the code of conduct that now comprises 49 governments beyond the G7.42 In 2024, another hub and network, the OECD—with its reach extended through GPAI—completed multistakeholder development of a reporting framework for companies’ implementation of the Hiroshima code of conduct.43 This work will supplement the OECD’s ongoing work to develop incident reporting systems for AI.44
This illustrates how the G7 has operated as a key hub for AI governance, with its initiatives spreading rapidly and being built on through other hubs (the OECD Secretariat) and networks (the G20, OECD members, GPAI members, the Hiroshima Friends group). Even though the G7 is a distinctly narrow and wealthy group, it is logical for these initiatives to emerge from this base.45 These are the countries where AI is being developed and deployed fastest and most widely;46 they therefore have clear jurisdiction over the entities involved and face a more immediate responsibility to deal with potential risks of AI to their populations or national interests.47 Because these countries are motivated and relatively like-minded, these smaller groupings have proved more agile and more able to align national interests. They also can draw on wide expertise and other resources, allowing them to engage in the complex research-based work and engagement needed to assess AI issues.
The example of the G7 and its linked hubs and networks highlights how the various efforts to build international cooperation on AI build on each other but also exist independently of one another. Such multiple centers of energy build resilience in AI governance by avoiding single points of failure and distributing functions. They also create opportunities to make progress with various sets of countries and other stakeholders depending on willingness and capacity to engage. In contrast, a larger or more centralized structure for international AI governance risks being slowed by bureaucracy and geopolitical tensions.
Evolving global networks and hubs
There are additional, more distributed networks beyond those linked to the G7 and other hubs described above. The OECD principles were widely influential in shaping approaches to AI, but the OECD was not alone in developing ethics principles for AI. By 2023, a study identified 200 ethics principles and frameworks from organizations of all kinds.48 Figure 5 shows the breakdown by type of organization: many came from governments and corporations, with the largest portion coming from NGOs and nonprofits combined.49

Other analyses of these ethics documents found considerable convergence in values and principles.50 A recurring theme was that AI should be “trustworthy” (i.e., reliable, safe, and “human-centered”), enhancing humans’ experience rather than replacing or harming them.
Most current governance networks are intertwined with civil society and industry groups as well as academic researchers, all of which inform governance functions. In some cases, these experts perform pure research; in others, they operate in multistakeholder forums, work on AI standards in global standards bodies, consult with developers of the models, or advise intergovernmental forums.51 These distributed networks of experts contribute to identifying and addressing a range of AI risks, from potential misuses or malfunctions of generative AI models to more evident and immediate risks like malign uses of synthetic content, bias, and discrimination. Many have participated in the expert networks of the OECD and GPAI,52 expert panels like the UN High-Level Advisory Body and the U.K.’s scientific panel on model safety,53 meetings of other international organizations, or events like the roundtables that the authors of this paper have convened for almost five years.54
New governance initiatives aimed at AI are still emerging, a form of governance entrepreneurship that manifests itself as governments compete for AI leadership. The most concretely advanced have been in the field of risk and safety, where the U.K. convened its Bletchley AI Safety Summit in October 2023.55 From that event emerged a growing number of AI safety institutes (AISIs), with the U.K., U.S., Korea, Canada, and France establishing or announcing an AISI, and the EU AI Office and Singapore performing safety functions; all participate in a network of AISIs announced in November 2024.56 China joined 27 other countries in the Bletchley Declaration emerging from that initial summit but not the Seoul Declaration that followed.57 Even so, China has established a “China AI Development and Safety Network,” a collaboration of leading universities and research institutes.58
Safety research can develop policies and guidance in areas such as testing, measurement, and red-teaming, focused on synthetic content risks, foundation model testing, and common approaches to risk assessment. It can also aim to align and mutually recognize the various approaches to AI safety that emerge from the AISIs. Company consortia have also joined work on the safety function, with the Frontier Model Forum, the AI Alliance, and the French ArGiMi Consortium testing major models and compliance with the commitments embodied in the Hiroshima Principles.59 France is hosting the next safety summit in February 2025, billing it as an AI Action Summit and propounding an agenda much broader than AI safety alone.60
This proliferation of channels for assessing and managing AI risk and safety shows both the catalyzing impact of the advent of generative AI and the interplay of various nodes and networks of AI governance. Other countries and regions are assessing whether they need additional safety analysis specific to their populations and societies, and the spread of safety bodies can build networks of incident reporting that help fill gaps in concrete information about AI risks.
Another form of governance entrepreneurship is the EU’s AI Act regulating “high-risk” AI systems and general-purpose AI. Intended primarily to enable a single market for AI systems, it also aims to fashion global rules for AI,61 just as the EU’s General Data Protection Regulation operated as a significant template for other countries adopting privacy and data protection laws and seeking to trade with the EU.62 The European Commission is working to implement the AI Act through codes of practice on transparency, risk mitigation, and internal governance for general-purpose AI, and through development of technical standards for conformity assessments.63 Notably, while the European Commission will oversee compliance with the AI Act and has a hand in shaping the codes of practice and standards, the drafting of the codes is being done by a large group of stakeholders.64 The U.S. National Institute of Standards and Technology’s (NIST) AI Risk Management Framework, released in 2023, charts a more flexible approach, specifically intended not to be one-size-fits-all and adaptable to varying levels of risk.65 Brazil and India are in the process of developing AI legislation, with Brazil’s resembling the EU’s and India looking to adopt a lighter touch with an eye to innovation and development, while still addressing risks.66
China has adopted some of the earliest domestic AI legislation that, like the EU’s law, adds to a body of laws in the digital arena.67 Its AI-specific rules focus on recommendation algorithms and deepfakes, requiring labeling and prohibiting disruption of social order; another rule creates liabilities for illegal content created with public-facing generative AI and obligations to use high-quality training data that respects intellectual property; China also requires that generated content be consistent with Communist Party of China doctrine and “Xi Jinping thought.”68 Notwithstanding this regulatory scheme and U.S. controls on the most powerful processing chips, in January 2025, DeepSeek released a chatbot model with capabilities that appear to make it competitive with leading U.S. models on certain benchmarks.69
AI governance interoperability
Another key area of governance that depends on international cooperation is making domestic approaches to AI governance interoperable. Efforts toward interoperability and alignment of AI policies and regulation involve a more heterogeneous set of issues and national interests than does AI risk and safety—implicating approaches to law, regulation, and national security that operate at the level of nation states. As a result, this governance function involves a wider array of initiatives with greater room for divergence as national governments tailor regulation of use cases and risks to their systems of law and values. Nonetheless, minimizing unnecessary divergence in domestic AI regulation is crucial to reducing costs and barriers to expanding access to AI. Divergent AI regulation can also undermine effective oversight of known AI risks that do not stop at borders, and can enable a race to the bottom or allow companies to arbitrage regulatory differences. In this regard, interoperability is also connected with AI risk management and safety: Interoperability is needed to strengthen security, including common approaches to red-teaming, cybersecurity that prevents access to AI systems by malicious actors, and mutual recognition by AISIs of each other’s work. Common approaches to data governance can strengthen data quality and reliability, as well as facilitate data-sharing across borders.
A 2021 report for the Forum for Cooperation on AI by three of the present authors identified several concrete areas of focus for international cooperation: regulatory alignment around common definitions; understanding of risk and redlines, approaches to auditing, and sectoral cooperation; mechanisms for sharing data; standards development; and joint projects or challenges in socially beneficial technologies like privacy-enhancing technologies (PETs).70
Governments are engaged in a wide variety of additional efforts to promote interoperability. This function has been central to international cooperation on ethical frameworks, AI safety, and other plurilateral and bilateral initiatives from the start, and governments continue to build out networks around common approaches. The EU-U.S. Trade and Technology Council and the U.S.-U.K. Tech and Data Dialogue, both initiated in 2022, are examples of concrete efforts to align on specific topics in AI governance, such as definitions, approaches to risk, and standards development, on a bilateral basis.71 Not all efforts at building interoperability flow from international cooperation. NIST’s AI Risk Management Framework (RMF) and its 2024 RMF profile for generative AI were domestic U.S. efforts developed in collaboration with the private sector.72 The AI RMF uses a combination of process-based guidance and reference to international AI standards to provide a tool that organizations globally can use to identify and manage AI risk. China launched a global AI governance initiative in 2023 as part of its Belt and Road network, which has so far been followed up with a diplomatic campaign.73
Trade agreements, which have played a central role in building consensus on digital governance issues, such as how to balance cross-border data flows with appropriate policies to ensure privacy and security, have been limited so far in addressing the many AI governance issues that also intersect with international trade. No intergovernmental discussions of digital trade issues should omit AI governance.
A common understanding of AI founded on the research and multistakeholder engagement of groups like the OECD and GPAI has enabled AI governance leaders in the G7, OECD, and other channels to reach consensus more rapidly and coherently. Significant features of the GDC aim to broaden such understanding. The UN plan for an international scientific panel is aimed explicitly at “scientific understanding through evidence-based impact, risk, and opportunity assessments…”74 In addition, the GDC set in motion UN-hosted “global dialogues” on AI governance to take place alongside UN General Assemblies and other UN gatherings.75
The work of technical standards development organizations (SDOs) is perhaps the broadest, most concrete, and most advanced set of efforts on interoperability. Engagement on standards development has been a recurring subject of governmental initiatives on AI because standards can provide the technical and management architecture for implementing outcomes expressed in principles and policies, and can do so in ways that bridge differences in laws or legal systems. This is why the EU-U.S. Trade and Technology Council discussions on AI included sharing priorities for standardization,76 and the G7 Hiroshima summit in 2023 affirmed support for “standards development organizations through multi-stakeholder processes.”77 The EU AI Act provides for standards to establish conformity assessment with the Act’s requirements, with a strong preference for adoption of international standards.78 The pan-European SDO CEN-CENELEC is working on additional European standards requested by the European Commission.79 The GDC identifies needs for a variety of standards and calls on SDOs “to collaborate to promote the development and adoption of interoperable [AI] standards that uphold safety, reliability, sustainability, and human rights” (side-stepping a proposal by the UN’s High-Level Advisory Body on AI for a new UN standards coordinating body).80
The U.K. Standards Hub tracks international standards development and identifies some 254 AI-specific standards adopted or under development. The largest number of these—21 published and eight more under development—come from ISO/IEC JTC 1/SC 42, the joint committee formed by the International Organization for Standardization and the International Electrotechnical Commission to deal with AI standards.81 This committee has members from 59 countries, more than the OECD, GPAI, and the Friends of Hiroshima group.82 The two other international SDOs most involved in AI standards, the IEEE Standards Association83 and the International Telecommunication Union,84 have even broader networks. The products of these processes can provide significant pathways for interoperability.
Figures 2 and 3 starkly illustrate the gaps between wealthier nations and populations and much of the world when it comes to AI. AI could accentuate these gaps if wealth flows to the top of AI value chains or if low-to-moderate income countries are unable to leapfrog the barriers they face. For the latter countries, overcoming barriers to AI deployment and adoption is a critical issue in realizing the opportunities of AI in the near term. For countries where AI is already advanced, failure to overcome these barriers represents a long-run opportunity cost in economic, social, and political capital. While some who have AI associate it with existential risk,85 most with little AI see not deploying it as the existential risk.86
The African Union’s broad continental strategy adopted in July 2024 aims at realizing opportunities to use AI as a positive force for achieving the Sustainable Development Goals.87 It notes “limited infrastructure and skills” as key inhibitors of AI uptake and therefore identifies “infrastructure capabilities, particularly in the area of energy, broadband connectivity, data centers and cloud computing, computing platforms such as high-performance computing and IoTs and quality data…[as] key to the development of AI solutions and systems.”88 Building these capabilities will need to involve a multitude of coalitions and networks of high-GNP and LMI countries along global, regional, geopolitical, and functional lines. Unlike, say, red-teaming, this is a task too large for a small handful of players. The UN has a vital organizing function as an institution that cuts across all these lines and can be a catalyst for building the networks and hubs needed.
The form and scope of this UN role is yet to be determined. Both the U.S. and China General Assembly resolutions on AI addressed bridging the digital divide,89 with China’s version putting “capacity-building” in the title and affirming a “central and coordinating role” for the UN.90 In the run-up to the adoption of the GDC, the U.S. announced a $100 million commitment for a program to expand access to AI models, build human capacity, and develop local datasets around the world, with $30 million in U.S. foreign aid and the rest from leading AI companies in funding or credits.91 During this time, China proposed an “AI capacity-building action plan for good and for all,” calling for all parties to increase investment in capacity-building and to conduct a further capacity-building workshop.92 These reflect differences in approach, with the U.S. relying on stakeholder participation and China “upholding multilateralism” (i.e., operating entirely within the UN system where member states decide). The Belt and Road Initiative has made China the largest provider of development financing, and the creation of a Belt and Road AI initiative makes AI finance a component of this investment. Between 2000 and 2017, the Chinese government supported global development finance projects worth nearly $4.5 billion in over 50 countries.93
The UN High-Level Advisory Body (HLAB) and a draft of the GDC proposed a fund for AI capacity-building to be managed by the UN Secretariat.94 The General Assembly did not support this proposal; instead, it called for the development of “international partnerships” and “leverag[ing] existing United Nations and multi-stakeholder mechanisms” for AI capacity-building and expanded access.95 This outcome reflects the competing views on the role of the UN in AI governance and the difficulty of reaching meaningful agreement within the UN given geopolitical tensions between the U.S., China, and others, particularly when it comes to AI.
Infrastructure is not the sole means for promoting AI opportunities in LMI countries. They also need datasets and models that can be trained on smaller data volumes than those used by well-known LLMs, are less compute-intensive, and can be fine-tuned for applications that serve the distinct needs of their populations. For example, IBM Research Brazil and the University of São Paulo have worked on fine-tuning models with small datasets for languages like Guarani Mbya and Nheengatu, involving community leaders in guiding the projects.96 Meta’s “No Language Left Behind” program offers open-source AI models that translate between 200 languages, including indigenous ones.97 In South Africa, LelapaAI’s InkubaLM is a small language model trained on isiZulu, Yoruba, Swahili, isiXhosa, and Hausa.98
In parallel with these efforts, smaller models and new “reasoning” models—such as Microsoft’s Phi-4, OpenAI’s GPT-4o mini, and DeepSeek R1—use less compute power while outperforming larger models at certain tasks, enabling broader access to AI language technologies, including on mobile devices.99 Collaborations between companies and local researchers are crucial to ensuring these models are culturally appropriate and effective.
These various pathways for developing AI governance at the international and domestic levels describe a fluid governance space with scope to build coalitions, identify opportunities for progress as they arise, and respond nimbly to developments in AI technology. Where a given governance initiative fits varies widely depending on the functions associated with AI risk or opportunity, and different bodies, forums, or sectors have different comparative advantages. It is too early to chart the landscape fully or precisely because the nodes and networks involved are widely distributed and developing rapidly along with the technology. Going forward, the focus should be on expanding this distributed global landscape to ensure that the international AI governance functions are being fulfilled in ways that involve the largest number of stakeholders needed to deliver effective outcomes.
The range of distributed AI governance efforts across intergovernmental forums in various combinations, standards development organizations, and stakeholder groups remains a work in progress. More will be needed as the issues requiring international cooperation continue to expand, and the inclusivity of these AI governance forums will need to be both functionally appropriate and sufficient to achieve legitimacy. In particular, these dual needs will require building out G7 outcomes to a wider range of governments and additional stakeholders. The inclusion in the G7 of the African Union as a participant and of 56 governments from all continents in the Friends of the Hiroshima process are significant steps, but broader outreach is still needed.100 Development of AI standards in global standards bodies is another area where expanding access for governments, industry, and civil society, especially in LMI countries, is needed to strengthen legitimacy and ensure that the resulting standards respond to diverse AI needs.
As AI governance initiatives progress, networked and distributed governance will remain the form of international cooperation best able to respond to the rapid pace at which AI is developing. While the future course of AI is uncertain, it is certain that AI models and applications will develop in ways that raise new challenges and opportunities. Such new developments will demand rapid, iterative governance responses from ad-hoc coalitions of governments, industry, and civil society, just as the six-month span of response to ChatGPT produced the Hiroshima Process, the White House voluntary commitments on AI, the Bletchley Declaration, and codes of conduct implementing the Hiroshima Principles. Whatever impact DeepSeek has on how and where models are trained going forward, its development underscores that, for AI governance to be effective, it must adapt and respond to rapid and dazzling technological change. Building and expanding an iterative and networked approach to AI governance will be key.
-
Acknowledgements and disclosures
We are grateful for the research assistance of Derek Belle, Mishaela Robison, Brooke Tanner, and Joshua Turner at the Brookings Institution. Derek Belle also acted as project manager for FCAI throughout the evolution of this paper. We are also thankful for editing and production assistance from Antonio Saadipour, Camille Busette, and Adelle Patten of the Brookings Institution.
Meta and Microsoft are donors to the Brookings Institution. Brookings and CEPS recognize that the value they provide is in their absolute commitment to quality, independence, and impact. The findings, interpretations, and conclusions in this report are not influenced by any donation.
The Brookings Institution is a nonprofit organization devoted to independent research and policy solutions. Its mission is to conduct high-quality, independent research and, based on that research, to provide innovative, practical recommendations for policymakers and the public. The conclusions and recommendations of any Brookings publication are solely those of its authors, and do not reflect the views of the Institution, its management, or its other scholars. For more, visit www.brookings.edu.
The Centre for European Policy Studies (CEPS) is an independent policy research institute in Brussels. Its mission is to produce sound policy research leading to constructive solutions to the challenges facing Europe. Facebook, Google, and Microsoft are donor members of CEPS. The views expressed in this report are entirely those of the authors and should not be attributed to CEPS or any other institution with which they are associated or to the European Union.
-
Footnotes
- OpenAI, “Introducing ChatGPT.” Last modified November 30, 2022. https://openai.com/index/chatgpt/.
- OECD, “Recommendation of the Council on Artificial Intelligence.” May 22, 2019. OECD/LEGAL/0449; UNESCO, “Recommendation on the Ethics of Artificial Intelligence.” 2022. https://unesdoc.unesco.org/ark:/48223/pf0000381137.
- African Union, “Continental Artificial Intelligence Strategy.” July 2024. https://au.int/sites/default/files/documents/44004-doc-EN-_Continental_AI_Strategy_July_2024.pdf.
- “Hiroshima Process International Code of Conduct for Organizations Developing Advanced AI Systems.” G7 Hiroshima Summit. https://www.mofa.go.jp/files/100573473.pdf; “ASEAN Guide on AI Governance and Ethics.” ASEAN. https://asean.org/wp-content/uploads/2024/02/ASEAN-Guide-on-AI-Governance-and-Ethics_beautified_201223_v2.pdf.
- European Commission, Proposal for a Regulation of the European Parliament and of the Council Laying Down Harmonised Rules on Artificial Intelligence (Artificial Intelligence Act) and Amending Certain Union Legislative Acts. COM(2021) 206 final. Brussels, April 21, 2021. https://eur-lex.europa.eu/legal-content/EN/TXT/?uri=CELEX%3A52021PC0206.
- Prithvi Iyer, “From Safety to Innovation: How AI Safety Institutes Inform AI Governance.” Tech Policy Press, October 25, 2024. https://www.techpolicy.press/from-safety-to-innovation-how-ai-safety-institutes-inform-ai-governance/.
- United Nations, “About the UN Secretary-General’s High-level Advisory Body on AI.” https://www.un.org/en/ai-advisory-body/about.
- United Nations Educational, Scientific and Cultural Organization (UNESCO), “How the ISO and IEC are Developing International Standards for the Responsible Adoption of AI.” Last modified August 6, 2024. https://www.unesco.org/en/articles/how-iso-and-iec-are-developing-international-standards-responsible-adoption-ai.
- United Nations, “Global Digital Compact.” https://www.un.org/techenvoy/global-digital-compact; United Nations, “Global Digital Compact.” Last modified September 22, 2024. https://www.un.org/global-digital-compact/sites/default/files/2024-09/Global%20Digital%20Compact%20-%20English_0.pdf, page 13.
- United Nations, “Secretary-General Urges Security Council to Ensure Transparency, Accountability, Oversight, in First Debate on Artificial Intelligence.” July 18, 2023. https://press.un.org/en/2023/sgsm21880.doc.htm; OpenAI, “Governance of Superintelligence.” Last modified May 22, 2023. https://openai.com/index/governance-of-superintelligence/#SamAltman.
- “Global AI Governance.” AI Action Summit, October 2, 2024. https://www.elysee.fr/en/sommet-pour-l-action-sur-l-ia/global-ai-governance.
- Yoshua Bengio, chair, International AI Safety Report. January 29, 2025. https://assets.publishing.service.gov.uk/media/679a0c48a77d250007d313ee/International_AI_Safety_Report_2025_accessible_f.pdf, p. 21.
- Tamkin, Alex, and Deep Ganguli, “How Large Language Models Will Transform Science, Society, and AI.” Stanford HAI, February 5, 2021. https://hai.stanford.edu/news/how-large-language-models-will-transform-science-society-and-ai.
- Ray Perrault and Jack Clark, “Artificial Intelligence Index Report 2024.” Stanford HAI, April 15, 2024. https://aiindex.stanford.edu/wp-content/uploads/2024/05/HAI_AI-Index-Report-2024.pdf.
- Case Western Reserve University, “Advancements in Artificial Intelligence and Machine Learning.” Last modified March 25, 2024. https://online-engineering.case.edu/blog/advancements-in-artificial-intelligence-and-machine-learning; Stanford University. Artificial Intelligence Index Report 2024. May 2024. https://aiindex.stanford.edu/wp-content/uploads/2024/05/HAI_AI-Index-Report-2024.pdf
- Zhao, Zehui, Laith Alzubaidi, Jinglan Zhang, Ye Duan, Yuantong Gu, “A comparison review of transfer learning and self-supervised learning: Definitions, applications, advantages and limitations.” Expert Systems with Applications, Volume 242, 2024, 122807, ISSN 0957-4174, https://doi.org/10.1016/j.eswa.2023.122807.
- Nestor Maslej, “Inside The New AI Index: Expensive New Models, Targeted Investments, and More.” Stanford HAI, April 15, 2024. https://hai.stanford.edu/news/inside-new-ai-index-expensive-new-models-targeted-investments-and-more.
- Royal Swedish Academy of Sciences, “The Nobel Prize in Chemistry 2024.” October 9, 2024. https://www.nobelprize.org/prizes/chemistry/2024/press-release/; Claire Moses, Cade Metz and Teddy Rosenbluth. “Nobel Prize in Chemistry Awarded to Trio Who Cracked the Code of Proteins.” The New York Times, October 9, 2024. https://www.nytimes.com/2024/10/09/science/nobel-prize-chemistry.html.
- M. Lederer, “UN adopts first resolution on artificial intelligence.” Associated Press, March 22, 2024. https://apnews.com/article/united-nations-artificial-intelligence-safety-resolution-vote-8079fe83111cced0f0717fdecefffb4d; Asma Khalid, “Biden and Xi take a first step to limit AI and nuclear decisions at their last meeting.” NPR, November 16, 2024. https://www.npr.org/2024/11/16/nx-s1-5193893/xi-trump-biden-ai-export-controls-tariffs.
- “Do you know all 17 SDGs?” UN. https://sdgs.un.org/goals.
- “Global Internet Map 2018.” Telegeography. https://global-internet-map-2018.telegeography.com/.
- “Top AI Players.” AIWorld.EU, 2025. https://aiworld.eu/.
- “Country Activity Tracker (CAT): Artificial Intelligence.” Emerging Technology Observatory, December 18, 2024. https://cat.eto.tech/?dataset=Investment&expanded=Summary-metrics
- Aili McConnon, “DeepSeek’s reasoning AI shows power of small models, efficiently trained.” IBM, January 27, 2025. https://www.ibm.com/think/news/deepseek-r1-ai.
- Dmitri Nosovicki, “Metcalfe’s Law Revisited,” 2016. 10.48550/arXiv.1604.05341.
- “Board of Directors.” ICANN. https://www.icann.org/en/board/about.
- Doria, Avri, and Wolfgang Kleinwächter, editors. Internet Governance Forum (IGF) The First Two Years. 2007. https://digitallibrary.un.org/record/3907205/files/IGF2007.pdf; “Contribution for the Leadership Panel.” ICANN, January 3, 2024. https://itp.cdn.icann.org/en/files/government-engagement-ge/icann-contribution-igf-leadership-panel-01-03-2024-en.pdf.
- NobelPrize.org, “Nobel Prize Outreach 2025.” January 27, 2025. https://www.nobelprize.org/prizes/physics/2024/press-release; J.J. Hopfield, “Neural networks and physical systems with emergent collective computational abilities.” Proc. Natl. Acad. Sci. U.S.A., 1982. 79 (8) 2554-2558, https://doi.org/10.1073/pnas.79.8.2554.
- Chernenko, Elena, Oleg Demidov, and Fyodor Lukyanov, “Increasing International Cooperation in Cybersecurity and Adapting Cyber Norms.” Council on Foreign Relations, February 2018. https://www.cfr.org/report/increasing-international-cooperation-cybersecurity-and-adapting-cyber-norms.
- National Science and Technology Council Committee on Technology, “Preparing for the Future of Artificial Intelligence.” Executive Office of the President, October 2016. https://obamawhitehouse.archives.gov/sites/default/files/whitehouse_files/microsites/ostp/NSTC/preparing_for_the_future_of_ai.pdf/; Matt Sheehan, “China’s AI Regulations and How They Get Made.” Carnegie Endowment for International Peace, July 10, 2023. https://carnegieendowment.org/research/2023/07/chinas-ai-regulations-and-how-they-get-made?lang=en; “OECD AI Group of Experts (AIGO).” The Future Society, February 10, 2019. https://thefuturesociety.org/oecd-ai-group-of-experts-aigo/; “OECD AI Principles overview.” OECD.AI. https://oecd.ai/en/ai-principles; “About GPAI.” GPAI.AI. https://gpai.ai/about/; “EU AI Act: first regulation on artificial intelligence.” European Parliament, August 6, 2023, https://www.europarl.europa.eu/topics/en/article/20230601STO93804/eu-ai-act-first-regulation-on-artificial-intelligence; Event. “Unpacking the White House blueprint for an AI Bill of Rights.” The Brookings Institution, December 5, 2022. https://www.brookings.edu/events/unpacking-the-white-house-blueprint-for-an-ai-bill-of-rights/; ISO/IEC. Information technology — Artificial intelligence — Guidance on risk management. ISO/IEC 23894:2023, Edition 1. International Standards Organization, 2023. https://www.iso.org/standard/77304.html; “Executive Order 14110 of October 30, 2023, Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence.” Federal Register, filed October 31, 2023. https://www.federalregister.gov/documents/2023/11/01/2023-24283/safe-secure-and-trustworthy-development-and-use-of-artificial-intelligence; Yoshua Bengio, May 17, 2024. https://www.gov.uk/government/publications/international-scientific-report-on-the-safety-of-advanced-ai; “Council of Europe adopts first international treaty on artificial intelligence.” Council of Europe, May 17, 2024. https://www.coe.int/en/web/portal/-/council-of-europe-adopts-first-international-treaty-on-artificial-intelligence.
- Carol Glatz, “Vatican City State puts AI guidelines in place.” United States Conference of Catholic Bishops, January 16, 2025. https://www.usccb.org/news/2025/vatican-city-state-puts-ai-guidelines-place.
- “Ethically Aligned Design.” IEEE, December 16, 2023. https://standards.ieee.org/wp-content/uploads/import/documents/other/ead_v1.pdf.
- “Outcomes of the G7 ICT Ministers’ Meeting in Takamatsu, Kagawa.” Ministry of Internal Affairs and Communications. https://www.soumu.go.jp/joho_kokusai/g7ict/english/about.html.
- “Recommendation of the Council on Artificial Intelligence.” OECD, May 22, 2019. OECD/LEGAL/0449.
- “G20/OECD Principles of Corporate Governance 2023.” OECD, September 11, 2023. https://www.oecd.org/en/publications/2023/09/g20-oecd-principles-of-corporate-governance-2023_60836fcb.html.
- Andrew W. Wyckoff, “A new institution for governing AI? Lessons from GPAI.” The Brookings Institution, September 20, 2024. https://www.brookings.edu/articles/a-new-institution-for-governing-ai-lessons-from-gpai/.
- “Global Partnership on Artificial Intelligence.” OECD. https://www.oecd.org/en/about/programmes/global-partnership-on-artificial-intelligence.html.
- “The Hiroshima AI Process: Leading the Global Challenge to Shape Inclusive Governance for Generative AI.” Kizuna, February 9, 2024. https://www.japan.go.jp/kizuna/2024/02/hiroshima_ai_process.html.
- “Joint Statement EU-US Trade and Technology Council of 4-5 April 2024 in Leuven, Belgium.” EU Commission, April 4, 2024. https://ec.europa.eu/commission/presscorner/detail/en/statement_24_1828.
- “FACT SHEET: Biden-Harris Administration Announces New AI Actions and Receives Additional Major Voluntary Commitment on AI.” White House, July 26, 2024. https://bidenwhitehouse.archives.gov/briefing-room/statements-releases/2024/07/26/fact-sheet-biden-harris-administration-announces-new-ai-actions-and-receives-additional-major-voluntary-commitment-on-ai/.
- “Hiroshima Process International Code of Conduct for Organizations Developing Advanced AI Systems.” G7 Hiroshima Summit. https://www.mofa.go.jp/files/100573473.pdf.
- “The Hiroshima AI Process Friends Group kicks off.” DigWatch, May 3, 2024. https://dig.watch/updates/the-hiroshima-ai-process-friends-group-kicks-off.
- “OECD launches pilot to monitor application of G7 code of conduct on advanced AI development.” OECD, July 22, 2024. https://www.oecd.org/en/about/news/press-releases/2024/07/oecd-launches-pilot-to-monitor-application-of-g7-code-of-conduct-on-advanced-ai-development.html.
- “AI Incidents.” OECD.AI. https://oecd.ai/en/site/incidents.
- Variengien, Alexandre and Charles Martinet, “AI Safety Institutes: Can countries meet the challenge?” OECD.AI, July 29, 2024. https://oecd.ai/en/wonk/ai-safety-institutes-challenge.
- Zhang, Daniel, Nestor Maslej, Erik Brynjolfsson, John Etchemendy, Terah Lyons, James Manyika, Helen Ngo, Juan Carlos Niebles, Michael Sellitto, Ellie Sakhaee, Yoav Shoham, Jack Clark, and Raymond Perrault, “The AI Index 2022 Annual Report.” arXiv, submitted May 2, 2022. https://doi.org/10.48550/arXiv.2205.03468.
- Karen Freifeld, “US tightens its grip on AI chip flows across the globe.” Reuters, January 13, 2025. https://www.reuters.com/technology/artificial-intelligence/us-tightens-its-grip-ai-chip-flows-across-globe-2025-01-13/.
- Kluge Corrêa, Nicholas, Camila Galvão, James William Santos, Carolina Del Pino, Edson Pontes Pinto, Camila Barbosa, Diogo Massmann, Rodrigo Mambrini, Luiza Galvão, Edmund Terem, Nythamar de Oliveira, “Worldwide AI ethics: A review of 200 guidelines and recommendations for AI governance.” Patterns, Volume 4, Issue 10, 2023, 100857, ISSN 2666-3899, https://doi.org/10.1016/j.patter.2023.100857.
- Jobin, Anna, Marcello Ienca, and Effy Vayena, “The global landscape of AI ethics guidelines.” Nature Machine Intelligence 1, no. 9 (2019): 389–99. doi:10.1038/s42256-019-0088-2.
- “AI experts from around the world connect for a multi-stakeholder dialogue on responsible AI for peace and security.” United Nations, September 25, 2023. https://disarmament.unoda.org/update/ai-experts-from-around-the-world-connect-for-a-multi-stakeholder-dialogue-on-responsible-ai-for-peace-and-security/.
- “OECD Working Party on Artificial Intelligence Governance (AIGO).” OECD.AI. https://oecd.ai/en/network-of-experts; “The Global Partnership on Artificial Intelligence.” GPAI / The Global Partnership on Artificial Intelligence. https://gpai.ai/about/.
- “Final Report – Governing AI for Humanity.” AI Advisory Body (UN), September 2024. https://www.un.org/en/ai-advisory-body; Department for Science, Innovation and Technology and AI Safety Institute, “International Scientific Report on the Safety of Advanced AI.” Gov.uk, May 17, 2024. https://www.gov.uk/government/publications/international-scientific-report-on-the-safety-of-advanced-ai.
- “The Forum for Cooperation on Artificial Intelligence.” The Brookings Institution. https://www.brookings.edu/projects/the-forum-for-cooperation-on-artificial-intelligence/.
- Foreign, Commonwealth & Development Office, Department for Science, Innovation and Technology, and AI Safety Institute, “AI Safety Summit 2023.” Gov.uk. https://www.gov.uk/government/topical-events/ai-safety-summit-2023.
- Allen, Gregory, and Georgia Adamson, “The AI Safety Institute International Network: Next Steps and Recommendations.” Center for Strategic and International Studies (CSIS), October 30, 2024. https://www.csis.org/analysis/ai-safety-institute-international-network-next-steps-and-recommendations/.
- “The Bletchley Declaration by Countries Attending the AI Safety Summit, 1-2 November 2023.” Gov.uk, November 1, 2023. https://www.gov.uk/government/publications/ai-safety-summit-2023-the-bletchley-declaration/the-bletchley-declaration-by-countries-attending-the-ai-safety-summit-1-2-november-2023; Department for Science, Innovation and Technology, “Seoul Declaration for safe, innovative and inclusive AI: AI Seoul Summit 2024.” Gov.uk, May 21, 2024. https://www.gov.uk/government/publications/seoul-declaration-for-safe-innovative-and-inclusive-ai-ai-seoul-summit-2024.
- “China AI Development and Safety Network.” https://ai-development-and-safety-network.cn/.
- “Frontier Model Forum.” Frontier Model Forum. https://www.frontiermodelforum.org/; “Argimi project.” Institut national de l’audiovisuel. https://www.ina.fr/institut-national-audiovisuel/research/argimi-project.
- “Artificial Intelligence Action Summit.” AI Action Summit. https://www.elysee.fr/en/sommet-pour-l-action-sur-l-ia.
- European Commission, Proposal for a Regulation of the European Parliament and of the Council Laying Down Harmonised Rules on Artificial Intelligence (Artificial Intelligence Act) and Amending Certain Union Legislative Acts. COM(2021) 206 final. Brussels, April 21, 2021. https://eur-lex.europa.eu/legal-content/EN/TXT/?uri=CELEX%3A52021PC0206.
- Regulation (EU) 2016/679 of the European Parliament and of the Council of 27 April 2016 on the protection of natural persons with regard to the processing of personal data and on the free movement of such data, and repealing Directive 95/46/EC (General Data Protection Regulation) [2016] OJ L 119/1.
- Demetzou, Katerina, and Vasileios Rovilos, “Conformity Assessments Under the proposed EU AI Act: A Step-By-Step Guide.” Future of Privacy Forum, November 2023. https://fpf.org/wp-content/uploads/2023/11/OT-FPF-comformity-assessments-ebook_update2.pdf.
- “Second Draft of the General-Purpose AI Code of Practice published, written by independent experts.” European Commission, December 19, 2024. https://digital-strategy.ec.europa.eu/en/library/second-draft-general-purpose-ai-code-practice-published-written-independent-experts.
- “AI Risk Management Framework.” NIST, January 26, 2023. https://nvlpubs.nist.gov/nistpubs/ai/NIST.AI.100-1.pdf.
- “Brazil AI Act.” Artificial Intelligence Act. https://artificialintelligenceact.com/brazil-ai-act/; Amlan Mohanty and Shatakratu Sahu, “India’s Advance on AI Regulation.” Carnegie Endowment for International Peace, November 21, 2024.
- Zeyi Yang, “Four things to know about China’s new AI rules in 2024.” MIT Technology Review, January 17, 2024. https://www.technologyreview.com/2024/01/17/1086704/china-ai-regulation-changes-2024/.
- Matt Sheehan, “China’s AI Regulations and How They Get Made.” Carnegie Endowment for International Peace, July 10, 2023. https://carnegieendowment.org/research/2023/07/chinas-ai-regulations-and-how-they-get-made?lang=en; Linette Lopez, “The rise of ChatCCP.” Business Insider, June 2, 2024. https://www.businessinsider.com/china-xi-jinping-ai-wreak-havoc-america-world-economy-2024-6.
- Hoskins, Peter, and Imran Rahman-Jones, “Chinese AI chatbot DeepSeek sparks market turmoil.” BBC, January 27, 2025. https://www.bbc.com/news/articles/c0qw7z2v1pgo.
- Kerry, Cameron F., Joshua P. Meltzer, Andrea Renda, Alex C. Engler, Rosanna Fanni, “Strengthening International Cooperation on AI.” Brookings and CEPS, October 2021. https://www.brookings.edu/wp-content/uploads/2021/10/Strengthening-International-Cooperation-AI_Oct21.pdf.
- “U.S.-EU Joint Statement of the Trade and Technology Council.” White House, April 5, 2024. https://www.commerce.gov/news/press-releases/2024/04/us-eu-joint-statement-trade-and-technology-council; “U.S.-UK Joint Statement: New Comprehensive Dialogue on Technology and Data and Progress on Data Adequacy.” U.S. Department of Commerce, October 7, 2022. https://www.commerce.gov/news/press-releases/2022/10/us-uk-joint-statement-new-comprehensive-dialogue-technology-and-data.
- “AI Risk Management Framework.” NIST, January 26, 2023. https://nvlpubs.nist.gov/nistpubs/ai/NIST.AI.100-1.pdf; “NIST-AI-600-1, Artificial Intelligence Risk Management Framework: Generative Artificial Intelligence Profile.” NIST, July 26, 2024. https://doi.org/10.6028/NIST.AI.600-1.
- “Global AI Governance Initiative.” The Third Belt and Road Forum for International Cooperation. http://www.beltandroadforum.org/english/n101/2023/1019/c127-1231.html.
- “Global Digital Compact.” United Nations, September 22, 2024. https://www.un.org/global-digital-compact/sites/default/files/2024-09/Global%20Digital%20Compact%20-%20English_0.pdf, p. 13.
- “EU-US Trade and Technology Council.” EU Commission. https://commission.europa.eu/strategy-and-policy/priorities-2019-2024/stronger-europe-world/eu-us-trade-and-technology-council_en.
- Sharma, Keah, and Malhaar Mohrarir, “2023 G7 Hiroshima Goals Set and Met.” G7 Research Group, May 26, 2023. https://g7.utoronto.ca/evaluations/2023hiroshima/goals-met.html.
- O’Brien, Claire, Mark Rasdale, and Daisy Wong, “The role of harmonised standards as tools for AI act compliance.” DLA Piper, January 10, 2024. https://www.dlapiper.com/es-pr/insights/publications/2024/01/the-role-of-harmonised-standards-as-tools-for-ai-act-compliance.
- “Artificial Intelligence.” CEN CENELEC. https://www.cencenelec.eu/areas-of-work/cen-cenelec-topics/artificial-intelligence/.
- “Global Digital Compact.” United Nations, September 22, 2024. https://www.un.org/global-digital-compact/sites/default/files/2024-09/Global%20Digital%20Compact%20-%20English_0.pdf, p. 14.
- “Standards Database.” AI Standards Hub. Accessed January 27, 2025. https://aistandardshub.org/ai-standards-search/?_sft_standards_scope=1-ai-specific&_sfm_committee_reference=ISO%2FIEC%20JTC%201%2FSC%2042.
- “Members and partners.” OECD. https://www.oecd.org/en/about/members-partners.html; “Members.” GPAI / The Global Partnership on Artificial Intelligence. https://gpai.ai/community/.
- “IEEE Membership.” IEEE. https://www.ieee.org/membership-catalog/productdetail/showProductDetailPage.html?product=MEMIEEE500.
- “Membership.” International Telecommunication Union (ITU). https://www.itu.int/hub/membership/our-members/.
- “Is AI an Existential Risk? Q&A with RAND Experts.” RAND, March 11, 2024. https://www.rand.org/pubs/commentary/2024/03/is-ai-an-existential-risk-qa-with-rand-experts.html.
- Abungu, Cecil, Marie Victoire Iradukunda, Raqda Sayidali, Aquila Hassan, and Duncan Cass-Beggs, “Global South Countries Have No Choice but to Care About Advanced AI.” CIGI, December 9, 2024. https://www.cigionline.org/articles/global-south-countries-have-no-choice-but-to-care-about-advanced-ai/.
- “Continental Artificial Intelligence Strategy.” African Union, July 2024. https://au.int/sites/default/files/documents/44004-doc-EN-_Continental_AI_Strategy_July_2024.pdf, p. 1.
- “Continental Artificial Intelligence Strategy.” African Union, July 2024. https://au.int/sites/default/files/documents/44004-doc-EN-_Continental_AI_Strategy_July_2024.pdf, p. 27, 44.
- “Seizing the opportunities of safe, secure and trustworthy artificial intelligence systems for sustainable development: draft resolution.” UN Resolution, 2024. A/78/L.49. https://digitallibrary.un.org/record/4040897?ln=en&v=pdf.
- “Enhancing international cooperation on capacity-building of artificial intelligence: draft resolution.” UN Resolution, 2024. A/RES/78/311. https://digitallibrary.un.org/record/4053245?v=pdf&ln=en.
- Imad Khan, “US State Department and Big Tech Will Invest $100 Million in Global AI Access.” CNET, September 23, 2024. https://www.cnet.com/tech/services-and-software/us-state-department-and-big-tech-will-invest-100-million-in-global-ai-access/.
- “AI Capacity-Building Action Plan for Good and for All.” Ministry of Foreign Affairs The People’s Republic of China, September 27, 2024. https://www.mfa.gov.cn/eng/wjbzhd/202409/t20240927_11498465.html.
- Kyra Solomon, “AidData and RAND release new dataset, report on China’s AI exports.” AidData, December 11, 2023. https://www.aiddata.org/blog/aiddata-and-rand-release-new-dataset-report-on-chinas-ai-exports.
- “Secretary-General’s remarks to workshop on artificial intelligence and capacity building [as delivered].” United Nations, September 2, 2024. https://www.un.org/sg/en/content/sg/statement/2024-09-02/secretary-generals-remarks-workshop-artificial-intelligence-and-capacity-building-delivered; “Partnership for Global Inclusivity on AI (PGIAI).” U.S. Department of State. https://www.state.gov/advancing-sustainable-development-through-safe-secure-and-trustworthy-ai/; “UK unites with global partners to accelerate development using AI.” Gov.UK, November 1, 2023. https://www.gov.uk/government/news/uk-unites-with-global-partners-to-accelerate-development-using-ai.
- “Global Digital Compact.” United Nations, September 22, 2024. https://www.un.org/global-digital-compact/sites/default/files/2024-09/Global%20Digital%20Compact%20-%20English_0.pdf, p. 14.
- Pinhanez, Claudio, Paulo Cavalin, Luciana Storto, Thomas Finbow, Alexander Cobbinah, Julio Nogima, Marisa Vasconcelos, Pedro Domingues, Priscila de Souza Mizukami, Nicole Grell, Majoí Gongora, and Isabel Gonçalves, “Harnessing the Power of Artificial Intelligence to Vitalize Endangered Indigenous Languages: Technologies and Experiences.” IBM Research Brazil and University of São Paulo, July 29, 2024. https://arxiv.org/pdf/2407.12620.
- “No Language Left Behind.” Meta. https://ai.meta.com/research/no-language-left-behind/.
- “InkubaLM: A small language model for low-resource African languages.” Lelapa.AI, October 25, 2024. https://lelapa.ai/inkubalm-a-small-language-model-for-low-resource-african-languages/.
- Michael Nuñez, “Microsoft’s smaller AI model beats the big guys: Meet Phi-4, the efficiency king.” VentureBeat, December 12, 2024. https://venturebeat.com/ai/microsofts-smaller-ai-model-beats-the-big-guys-meet-phi-4-the-efficiency-king/; Maxwell Zeff, “OpenAI unveils GPT-4o mini, a smaller and cheaper AI model.” TechCrunch, July 18, 2024. https://techcrunch.com/2024/07/18/openai-unveils-gpt-4o-mini-a-small-ai-model-powering-chatgpt/; Aili McConnon, “DeepSeek’s reasoning AI shows power of small models, efficiently trained.” IBM, January 27, 2025. https://www.ibm.com/think/news/deepseek-r1-ai.
- “Member countries of the Hiroshima AI Process Friends Group (in alphabetical order).” Hiroshima AI Process. Accessed January 27, 2025. https://www.soumu.go.jp/hiroshimaaiprocess/en/supporters.html.