In the “AI summer” of recent years, centers of artificial intelligence (AI) policymaking have blossomed around the globe as governments, international organizations, and other groups seek to realize the technology’s promise while identifying and mitigating its accompanying risks. Since Canada became the first country to announce a national AI strategy in 2017 and then led G7 adoption of a “common vision for the future of artificial intelligence” in 2018, at least 70 countries have developed AI strategies, almost every multilateral organization has adopted a policy statement on AI, and the Council of Europe identifies some 450 AI governance initiatives from a wide variety of stakeholders. This worldwide flurry reflects how thoroughly generative AI models, and the explosive uptake of ChatGPT in particular, have captured mainstream attention.
Now, the United Nations (UN) aims to impose order on this expanding landscape. Secretary-General António Guterres—a prominent voice in calling for a global body to govern the perceived existential risks of emerging foundational AI models—initiated a Global Digital Compact to be finalized alongside this September’s UN General Assembly. Last year, to explore how AI fits into this compact, he appointed international experts to a High-Level Advisory Body on AI (UNAB). In short order, this panel issued a promising interim report, which framed an approach that would be “agile, networked, flexible” and “make the most of the initiatives already underway.” A final report planned before the September summit is past due.
A draft final report leaked in July fails to adhere to the approach promised in the interim report. Beneficial parts of the draft emphasize the UN’s indispensable role in promoting access and capacity-building to ensure that the benefits of AI are distributed globally. But the draft goes off course in proposing a broad role in AI policy for the UN Secretariat as a superstructure for AI governance functions already underway in multiple channels. Involving a body of 193 nations, with such widely differing interests, in many of these functions is a poor prescription for agility and flexibility, a distraction from the critical path of broadening AI capabilities, and an invitation to accentuate geopolitical divides around AI.
The expanding universe of AI governance
The UN effort adds to spreading constellations of governance, networks with many hubs and nodes and diverse interconnections. After its initial 2018 statement, the G7 established a Global Partnership on AI (GPAI) to put responsible AI principles into practice, eventually involving international experts and 29 governments. The Organization for Economic Co-operation and Development (OECD) has been deeply involved with AI, adopting ethics principles in 2019, establishing an observatory to track national policies and AI incidents, hosting GPAI, and developing definitions now incorporated into European Union law and UN General Assembly resolutions, among other policies. Within the UN, the International Telecommunication Union has convened annual AI for Good Summits since 2017, and UNESCO adopted its recommendation on AI ethics in 2021. Last year, China launched a “Belt and Road AI Governance Initiative.” This year, the Council of Europe arrived at the first treaty on AI, and African Union ministers endorsed an AI strategy and African Digital Compact to position Africa for AI adoption and participation in global governance.
These broad efforts have produced more focused initiatives. A Japanese project in the G7 converged with U.S.-EU discussions and with voluntary commitments the White House secured from leading developers of foundational AI models to produce the “Hiroshima Process International Code of Conduct for Advanced AI Systems.” This code and a broader set of principles for all AI developers were endorsed by the G7 and later, on May 2, 2024, by a growing 53-nation “Hiroshima Process Friends Group.” The OECD and GPAI recently joined forces to combine resources and engage with a broader group of nations on AI work. The AI Safety Summit convened by the United Kingdom last October and the ensuing summit in Seoul spawned “safety institutes” in the U.K., U.S., South Korea, Canada, Japan, and France, among others, to develop testing and monitoring for emerging AI models. In addition to these various intergovernmental forums, several international standards development organizations, especially the IEEE Standards Association and the International Organization for Standardization (ISO), have adopted numerous technical standards for AI.
There is no shortage of governance actions
As the UNAB draft report says, “there is no shortage of documents and dialogues focused on governance.” It nevertheless concludes that “a global governance deficit with respect to AI” exists. The rationale for this conclusion is thin.
The draft report dwells on the point that, of seven major international governance instruments outside the UN, just seven countries—the G7 members—are parties to all of them, while 118 are in none. However, rather than being an ominous development, it makes sense that G7 members have taken the lead; as the countries where AI has advanced the most, they face the most pressing need to act and have the power and resources to do so. In smaller, like-minded groups, they have been able to move with greater speed and achieve more concrete outcomes than a body with 193 members of very disparate interests would be able to.
In turn, the outcomes of these more focused efforts are translating into concrete effect as the EU implements AI legislation, Canada debates a law, U.S. federal agencies implement President Biden’s executive order to deploy AI with care for safety and individual rights, and a U.K.-appointed panel of international experts has produced an initial scientific report on the safety of advanced AI. Similarly, the OECD has proven its ability to make progress on AI policy. Its merger with GPAI reflects how valuable the OECD’s research-based definitions and data have been in informing AI governance. Furthermore, the OECD has developed an effective track record of bringing stakeholders into policy development, in contrast to the UN’s member-state-driven process.
In addition, networks and nodes of AI governance are wider and more robust than the draft report gives credit for. Its counting of countries overlooks the ways that non-G7, non-OECD countries participate in current governance efforts. The G7 includes the European Union as well as various leaders from the African Union and other countries in its summits, providing indirect participation to states that are not G7 members. Together, the OECD, GPAI, the Hiroshima Friends Group, and participants in the U.K. Safety Summit and its “Bletchley Declaration” include a significant number of countries from the Global South: Argentina, Brazil, Brunei, Chile, China, Colombia, Costa Rica, India, Kenya, Laos, Mexico, Nigeria, Rwanda, the Philippines, Saudi Arabia, Singapore, Thailand, Turkey, and the UAE. All UN member states participated in the adoption of UNESCO’s recommendations on AI ethics. And those 450 AI governance initiatives mentioned earlier come not only from international bodies, but also from civil society organizations, corporations, and academia.
Governance should evolve, not descend from on high
The UNAB report declares that a governance deficit exists because “the patchwork of norms and institutions is still nascent and full of gaps,” and the existing initiatives cannot be “truly global in reach and comprehensive in coverage.” This presumes that AI calls for governance on a global basis, and that such governance must be comprehensive. The report insists that “the imperative of global governance…is irrefutable,” but it does not establish why.
It is quite correct that AI has global dimensions that require international cooperation. I co-lead a small, early-stage dialogue on global governance premised on the need for global cooperation and alignment on AI policy and development. As I wrote along with colleagues in a 2021 report, “no one country acting alone can make ethical AI pervasive, leverage the scale of resources needed to realize the full benefits of AI innovation, and ensure that advances from developing AI systems can be made available to users in all countries in an open and nondiscriminatory trading system.” However, global cooperation is not the same as global governance. And the array of collaborative frameworks and projects demonstrates that the value of cooperation around AI is widely understood, with a remarkably high level of cooperation at this early stage.
A key insight of the UNAB’s 2023 interim report was that consideration of AI governance must begin by establishing what functions governance performs and where the gaps are. While the draft report identifies clear gaps in realizing the opportunities of AI, it does not specify what functions relating to identifying and mitigating the risks or to aligning national AI policies are not already being performed. Although the report declines to recommend establishment of an international governmental body or governance of “all aspects of AI,” it does suggest several UN functions that sound like setting global rules for AI.
One proposed function is a “policy forum” of member states to “foster interoperable governance approaches,” including safety standards, rights, and norms that would “set the framework for global cooperation.” To drive soft law development, the report suggests a standards exchange that would operate not just as a resource on the bottom-up work of international standards bodies, but as a body to evaluate the standards themselves and identify where additional standards are needed. For capacity-building, it recommends a global fund used not only to expand access, but also to provide sandboxes and tools to test “governance, safety, and interoperability solutions.”
While an AI Office within the UN Secretariat may make sense to align efforts across UN bodies and conduct outreach to stakeholders, its mission should be as a facilitator rather than a policymaker.
The strength of networks
What the UNAB draft report describes as a “patchwork” is what others have called “regime complexes,” or, as Nobel Prize-winning economist Elinor Ostrom put it, “polycentric governance.” Such networks create fluid space to build coalitions on the wide range of issues presented by AI, iterate on information and solutions, and distribute functions where the greatest capacity and energy exists. As with other networks, various nodes and hubs provide multiple pathways that protect against failure points and speed up the transmission of ideas. The rapid development and endorsement of the Hiroshima AI Process is an example of an iterative back-and-forth that can advance governance step-by-step. Far more than what is proposed in the UNAB final draft, existing systems meet the interim report’s goal of being agile, networked, and flexible.
The diverse centers of effort involved provide distributed and iterative solutions to the complexity of AI. AI differs from other subjects that serve as reference points for approaches to global issues, such as nuclear power and climate change. AI is a general-purpose technology with multidimensional attributes that are only beginning to be discovered, and it operates at unprecedented scale and evolves rapidly. Unlike climate change, where the broad membership of the Intergovernmental Panel on Climate Change (IPCC) can project from a vast accumulation of weather and climate observations across time and space, most of the data about AI is unavailable or unknown other than to a relatively small group of experts. The developing field of AI is far from ripe for centralized governance.
The UNAB is absolutely right to highlight the potential for AI to accentuate digital divides among countries and within societies. It’s evident that the winner-take-all effect of technology has contributed to widening income gaps, and the scale, speed, and versatility of AI could exacerbate these profoundly for societies with limited access to the technology or the skills to adapt it. This makes the UN’s development mission compelling and its broad membership a decisive comparative advantage there, far more than in the alignment of national policies.
In the end, the UN member states will decide what goes into the Global Digital Compact. UN General Assembly resolutions on AI in March (led by the U.S.) and July (initiated by China) have featured the development mission as the key priority, and the most recent draft of the Global Digital Compact in July is more agnostic in its AI outcomes than the draft UNAB recommendations. For many member states, leveraging the benefits of AI—to improve their productivity and ensure that they are not left behind—looms larger than the safety and ethics issues that dominate many AI policy salons. This backdrop provides some hope that, rather than dabble in managing AI risks and safety or the alignment of national AI policies and laws, as proposed in the UNAB draft report, the General Assembly will focus relentlessly on the UN’s critical role in expanding access and capacity to enjoy the promise of AI.
Commentary
The good, the not-so-good, and the ugly of the UN’s blueprint for AI
August 29, 2024