Past Event
9:00 am - 4:15 pm EDT
1775 Massachusetts Ave NW, Washington, DC 20036
The White House released its AI Action Plan on July 23, 2025, aiming to achieve United States dominance in artificial intelligence based on three pillars: “Accelerating innovation, building AI infrastructure, and leading in international diplomacy and security.” Along with executive orders to flesh out these goals, the plan includes steps for a coordinated effort by the U.S. government to promote “the export of full-stack American AI technology packages.” Meanwhile, other countries are also seeking to develop their own AI capacities under the banner of “sovereign AI,” and efforts continue at the international level to strengthen cooperation on AI governance, including through the UN, the Organization for Economic Cooperation and Development (OECD), and other global standards-setting bodies.
On Oct. 21, 2025, the Brookings Institution’s Forum for Cooperation on AI (FCAI) convened “The American AI Stack and the World,” a full-day conference exploring the global implications of the Trump administration’s AI Action Plan.
The first panel, “American AI in a Changing World,” opened the event by discussing different jurisdictions’ definitions of and approaches to AI sovereignty. Panelists emphasized that AI sovereignty is about managing interdependence and identifying layers in the AI stack—compute, data, talent, energy—where countries can exercise selective control. Pablo Chavez, adjunct senior fellow with the Center for a New American Security, analyzed the global sovereign AI landscape, noting that more than 100 sovereign AI projects have emerged worldwide since ChatGPT’s launch. Chavez emphasized that sovereign AI is not about complete self-sufficiency but about building an “AI Jenga stack” in which countries identify their specific contributions to the AI ecosystem through international partnerships and managed interdependence. Paul Timmers, partner at WeltWert and professor at KU Leuven, offered a European perspective, arguing that tech sovereignty goals should encompass more than AI and aim at broader strategic autonomy. He cautioned that U.S. rhetoric around values and dominance could strain transatlantic cooperation unless accompanied by genuine partnerships.
Marc-Étienne Ouimette, founder of Cardinal Policy, provided a middle-power perspective, describing Canada’s pragmatic approach of identifying leverage points in energy, minerals, and compute access. Samm Sacks, senior fellow at the Yale Law School Paul Tsai China Center, contextualized these issues within the U.S.-China rivalry, arguing that both sides are grappling with “managed interdependence”: the tension between control and connectivity in AI supply chains. As moderator Joshua Meltzer summarized in closing, the long-term success of American AI policy will depend on building trusted, reciprocal partnerships that respect sovereignty efforts around the world.
The second panel, “Emerging Models and Innovation Policy,” was moderated by Elham Tabassi and explored how openness can offer a successful foundation for innovation and inclusivity in exporting an American AI stack. Panelists David Cox, vice president for AI models at IBM Research, and Frank Nagle, chief economist at the Linux Foundation, discussed how openness in foundational models, weights, and hardware should be viewed as a spectrum.
Both argued that many of today’s foundation models fall short of true open-source principles, as their code and training data remain largely inaccessible. Cox emphasized that openness must extend across hardware, software, and governance to promote collaboration and reduce concentration of power, while Nagle highlighted that greater transparency accelerates accountability and innovation. They also noted that enterprises are increasingly turning to smaller, open models to avoid vendor lock-in and reduce costs, though many still overinvest in closed systems.
The third panel, “Meeting Global Aspirations,” moderated by Chinasa T. Okolo, spotlighted AI voices and advocates from the Global Majority. Panelists included Lyantoniette Chua, co-founder of AI Safety Asia; Claudia Del Pozo, founder and director of Eon Institute; and John Kamara, founder of the AI Centre of Excellence Africa. The speakers emphasized that sovereign AI initiatives should center local ownership, cultural agency, and equitable participation. Chua discussed how Asian nations are navigating between U.S. and Chinese technological influence while building regional coalitions, such as AI Safety Asia, to promote cooperative governance. Del Pozo highlighted Latin America’s efforts to embed local values into AI governance despite fragmented implementation and foreign involvement in AI infrastructure. Kamara underscored Africa’s need for coordinated regional strategies, data ownership, and job creation in pursuing AI sovereignty, as well as the role international organizations can play in funding capacity building across the continent.
All panelists stressed that achieving AI sovereignty in the Global Majority requires partnership. Chua called for multilateral funds and supranational platforms to empower middle powers as “norm entrepreneurs,” while Del Pozo urged regional cooperation to define shared “non-negotiables” when making strategic partnerships. Kamara advocated for democratized, open-source systems to ensure local participation in building critical infrastructure, although panelists agreed openness must be coupled with safeguards. Together, they argued that international organizations should set global baselines for rights and safety while allowing regions to define their own governance models.
Continuing this conversation on the role of international organizations, the final panel of the day found that AI sovereignty efforts necessitate rather than undermine international cooperation. Moderated by Cameron Kerry, the “Paths to Engagement and Capacity Building” panel brought together experts to discuss how global governance frameworks, standards bodies, and multilateral networks can build trust and alignment. Alexandra Reeve Givens, president and CEO of the Center for Democracy and Technology, emphasized that interoperability and human rights-based governance are key to harmonizing global AI standards, pointing to initiatives such as the OECD’s Hiroshima AI Process reporting framework and the International Network of AI Safety Institutes as vehicles for building consensus. Ursula Wynhoven, the International Telecommunication Union’s representative to the United Nations and head of UN affairs, highlighted the ITU’s extensive work developing AI standards and training programs to bridge the digital divide. Building on this, Howard Wachtel, senior director and head of UN and international organizations policy at Microsoft, noted that even though new UN initiatives such as the AI Scientific Panel and the Global Dialogue on AI Governance are non-binding, they can serve as starting points for norm-building, developing national AI frameworks, and promoting safety standards.
Together, the panels underscored that meaningful AI sovereignty efforts will hinge on shared governance and sustained international cooperation. As governments, companies, and international bodies steer the development and diffusion of the AI stack, the challenge ahead is to translate national strategies into collective benefit, ensuring that innovation is grounded in security, equity, and trust across borders.