With the July 2024 announcement by the Secretary-General of the Organisation for Economic Co-operation and Development (OECD) that the Global Partnership on AI (GPAI) is merging with the OECD's AI policy work, a new chapter in global AI governance unfolds. GPAI's run offers lessons for anyone advocating the creation of a new AI governance body. The primary lessons are that 1) there is a big difference between a good idea propelled by political ambitions and the administrative reality of standing up a new international initiative, and 2) any new body or initiative must find its niche and add value to the existing governance network. As the UN prepares for the Summit of the Future this month, complete with the Global Digital Compact and the final report from its high-level AI advisory body, it should take heed of the lessons that GPAI's evolution provides.
GPAI: The right idea at the right time
The G7 Information and Communications Technology Ministers convened in Takamatsu, Japan, in 2016, where they proposed the international governance of AI for the first time: a prescient move well ahead of the current chorus. Canada, which held the G7 presidency in 2018, seized on the issue and proposed the creation of an International Panel on AI (IPAI), intended as an analogue to the Intergovernmental Panel on Climate Change (IPCC). This, too, was a bold international move that also resonated domestically: Canada is a leader in AI, with pioneers like Yoshua Bengio and Geoffrey Hinton considered “Godfathers of AI,” and Montreal is a hotbed of AI activity situated in the politically important province of Quebec.
France took over the G7 presidency in 2019, and the two leaders—Canadian Prime Minister Justin Trudeau and French President Emmanuel Macron—shared a personal friendship, reinforced by the bond between France and French-speaking Quebec. Macron, who seeks to assert French influence in global affairs, has succeeded in making France a start-up hotspot and an early AI power. A new global AI institution, led by France and Canada, fit perfectly into their collective vision. IPAI was officially proposed to leaders meeting in Biarritz in 2019, garnering support from all G7 members except the United States.
Midway through the Trump administration in 2018, U.S. relations with Canada and France were strained. Both countries saw IPAI as a check against the pro-market, free-wheeling U.S. approach. While the U.S. initially did not join IPAI, it eventually did so in 2020 as a founding member, prompted in part by a desire to shape the initiative's development and to build a coalition that would isolate China. U.S. membership led to a name change from IPAI to GPAI because the Trump administration did not want any association with the IPCC and climate change. The U.S. supported hosting GPAI at the OECD to ensure close coordination with the OECD's work on AI and suggested adopting the 2019 OECD AI Principles as its foundation.
To great acclaim, GPAI was launched in 2020 with 15 founding members. It was the right idea at the right time.
Growing pains and lessons
Now, four years later, a major mid-course correction is underway. Why the shift, and what lessons does it hold for future AI governance?
Organizational architecture matters…
While the two core founding countries, Canada and France, welcomed the legal and administrative convenience provided by the OECD, what they had really envisioned was an independent institution led by them. They designed a structure for GPAI that set up centers in Montreal (CEIMIA) and Paris (INRIA) to oversee the activities of the working groups organized to analyze specific issues (e.g., data governance, the future of work). These centers also oversaw the convening of the working groups into a “multistakeholder expert group” that would emulate an IPCC for AI. In effect, all the substantive work was overseen and organized by two countries. This structure left no clear channel for other entities working on AI issues, such as the growing constellation of players in other countries or even the OECD itself, to contribute in a meaningful way. It also deprived member countries of a sense of agency and ownership, since control rested with the two founders.
A similar fate could occur if the United Nations (UN) proposes to create a new UN body or seeks to divide global AI governance work across UN agencies, such as UNESCO, ITU, UNICEF, or UNDP, while assigning a bystander role to the essential work of an expanding AI community of international organizations, national agencies, standards bodies, industry-led groups, and nonprofit organizations.
The immediate post-World War II era was characterized by the creation of new treaty-based international organizations. That model has since been supplanted by “regime complexes,” which are a “collective of partially overlapping and non-hierarchical regimes” and have come to define 21st-century global governance. For AI, the complex consists of a networked web of institutions that support AI governance initiatives, from the G7 and the AI Safety Institutes to the AI Summits. This network constitutes a “secretariat” that provides continuity and follow-through as the presidency of these groups rotates across countries. While this approach requires coordination and may introduce some incoherence, it is inherently stronger, more agile, and less prone to capture. GPAI is an element in this complex, as are the OECD and specialized UN agencies like the ITU and UNESCO.
While informal governance mechanisms like the G7 are not inclusive of all players, these smaller nodes bring a like-mindedness that injects speed, avoids gridlock or diluted recommendations, and can act as the basis for broader agreements. For example, G7 meetings in 2016 and 2017 were the catalyst for the OECD AI Principles adopted by 44 countries in 2019, which then became the basis for the G20’s AI Principles and the conceptual framework for GPAI. Over 50 countries from six continents agreed to these principles, which provide a common vision that improves coherence and interoperability. Rather than a patchwork of norms and institutions, regime complexes are flexible, evolving, dynamic networks that accommodate diversity and encourage new ideas. This approach is especially appropriate since “[t]he world is closer to the start of its AI journey than the end.”
Three organizational lessons from GPAI for the UN can be drawn from this turn of events.
First, instead of trying to funnel this work through channels it controls, the UN should embrace a role as convener of this broader community and its various networks that extend outside the UN system. This would help to ensure buy-in and support. Second, the UN should resist the urge to build a new organization and instead use or repurpose an existing body within the regime complex with a proven track record. Lastly, inclusivity should not be the sole objective driving organizational design; other factors like speed, agility, and capacity are also important. As discussed below, GPAI has shown that inclusivity can be acquired over time.
…and needs to be underpinned by a stable budget…
GPAI's structure, with separate centers in Paris and Montreal, vastly complicated the funding of the working groups, since it is nearly impossible for one government to give money to another government's institutions (e.g., the Canadian and French centers). While it is easier for governments to fund an international organization like the OECD, which hosted the GPAI secretariat, the need for funding was not initially foreseen by all members and only slowly became clear. To attract members, expectations were set that budget contributions would be minimal, in-kind, or voluntary, creating financial uncertainty. This instability led to frequent staff turnover and stymied initial momentum.
Standing up a new international initiative requires financial resources, and these resources can be scarce when countries face many competing demands. The reality is that any effort to set up a new international body needs to plan on a budget of at least five million to seven million U.S. dollars per year, guaranteed for five years. While this is not a lot of money, in this zero-sum budget era it means either redirecting funds from elsewhere, with the battles that reallocation entails, or taking money from deep-pocketed entities (e.g., Big Tech or rich countries), with the strings that such contributions carry.
…and must have a clear, distinct mission and mandate.
Perhaps what most fundamentally plagued GPAI, and what its merger with the OECD will rectify, was its mission, which was interpreted differently depending on the participant. The original 2018 ambition was to create a one-stop shop for AI policy analysis that would “guide AI policy development and the responsible adoption of AI.” It would take into consideration scientific and technological advances, economic transformation, respect for human rights, collective and societal interests, geopolitical developments, and cultural diversity. At the 2019 G7 Summit, France asserted that IPAI/GPAI would “have a long-term role in proposing guidelines for the development of artificial intelligence for the benefit of humanity.”
In line with this, experts who joined the effort were under the impression that they would be engaged in making AI policy recommendations and be part of the international policymaking process. But governments, including the U.S. and some large European countries, were adamant from the outset that AI policymaking and norm-setting would be the job of government policymakers with formal mandates convening at places like the OECD and the European Commission. They insisted that GPAI's job would be to help implement policy by providing practical technical tools: “from policy to practice.” This misalignment and confusion led to frustration among experts and a dearth of useful outputs, reducing GPAI's visibility.
Meanwhile, the OECD, with stakeholder technical input from its ONE AI expert group, filled the vacuum by producing practical building blocks: a definition of AI, a classification scheme for AI systems that underpinned risk management frameworks, a work stream on defining and tracking AI incidents, and ongoing efforts to assemble a toolkit of tools for trustworthy AI that address issues like bias in training data. The OECD's proven track record of practical accomplishments created a virtuous pull for experts who wanted to make concrete progress with real impact.
In short, mandates matter: they should be carefully constructed and communicated to be precise and targeted, with clear milestones and practical deliverables. Any new body or initiative must acknowledge at the outset that global AI governance is a crowded space that already benefits from considerable expertise and a variety of interacting networks. Funding should also be commensurate with these goals.
The promising future of GPAI
As GPAI now formally merges with the OECD's work on AI, the combination will be much stronger than its two individual parts. The new body will retain the GPAI brand and include at its table the OECD's Working Party on AI Governance, which supports the Digital Policy Committee. This structure will provide a direct channel for GPAI work to inform policy recommendations and contribute to OECD soft law development. GPAI will benefit from the OECD's mixed budget of stable dues from member countries and voluntary contributions for specific projects, a combination that provides both stability and the flexibility to move quickly and deeply.
GPAI's Multistakeholder Expert Group will join the OECD's Network of Experts on AI. This is “multistakeholderism” at its best: the OECD, unlike some international organizations, has had a formal mechanism for engaging its stakeholders since 2008. These participants have seats at the table, access to most documents, and the ability to engage directly with governments. Given the technical nature of AI, a multistakeholder approach is essential. But experts in this area are busy people, and they need clarity about the role and impact they will have.
Perhaps the most demonstrable success of GPAI has been the impressive diplomatic achievement of nearly doubling its membership over four years, from 15 founding members that included India and Singapore to 29 countries including Argentina, Brazil, and Senegal. In a deteriorating geopolitical environment where multilateralism is under fire, bringing together this diverse set of countries to promote principles-based AI is no small feat. It is a hallmark of GPAI and, specifically, of the diplomatic vision of Canada and France.
The union will add six non-member countries to the OECD's work, which will now encompass 44 countries across six continents. This reach is amplified further by the leading role the OECD and GPAI will play in supporting the G7's Hiroshima Process, which established “the first international framework and code of conduct aimed at promoting safe, secure and trustworthy advanced AI systems.” That effort includes the “Friends of the Hiroshima Process,” a group of 53 countries including Kenya, Nigeria, and the UAE. With this growth, the merged GPAI-OECD entity is diverse enough to avoid criticism that it lacks “inclusivity,” yet small enough to be agile and get things done. The new entity will be a critical node in the overall network.
As attention turns towards the creation of a new body or bodies for the global governance of AI, lessons need to be learned from the evolution of GPAI, including the importance of an administrative structure well-suited to the proposed functions. Moreover, any new initiative needs to complement the existing multi-layered, networked AI governance ecosystem that has emerged. By learning from GPAI’s past, the global community can forge more effective international AI governance institutions for the future.
A new institution for governing AI? Lessons from GPAI
Commentary | September 20, 2024