Executive Summary
The concept of artificial intelligence (AI) sovereignty has entered policy discussions as governments confront the strategic importance of AI infrastructure, data, and models amid rising dependence on a small number of firms and jurisdictions. This report defines AI sovereignty as a spectrum of strategies to enhance a country’s capacity to make independent decisions about critical AI infrastructure deployment, use, and adoption, rather than literal autarky.

Motivations vary: protecting national security and resilience, supporting economic competitiveness, ensuring cultural and linguistic inclusion in model training and datasets, and strengthening influence in global governance. These aims are often legitimate, but “sovereign AI” can also become a vehicle for protectionism, fragmented markets and standards, and duplicative or stranded public investment.

The central finding is that full-stack AI sovereignty is structurally infeasible for almost any country because AI is a transnational stack with concentrated choke points across minerals, energy, compute hardware, networks, digital infrastructure, data assets, models, applications, and the crosscutting enablers of talent and governance. The practical alternative is “managed interdependence,” an approach that relies on strategic alliances and partnerships to reduce risks throughout the AI stack. Countries can operationalize managed interdependence by mapping dependencies by layer, prioritizing feasible interventions, diversifying suppliers and partners, and embedding interoperability and portability through technical standards, procurement, and governance. Done well, managed interdependence can strengthen resilience and agency while preserving the benefits of open markets and cross-border collaboration.
Introduction
As artificial intelligence (AI) occupies an increasingly central role in global public policy and discourse, “AI sovereignty” has become part of many policymakers’ vocabularies. The term bundles several concepts of strategic, economic, and cultural autonomy achieved by managing key infrastructure, data, and governance rules within jurisdictional boundaries. The motivations behind it reflect valid governmental interests as well as objectives that may prove counterproductive. Yet AI rests on global foundations—transnational research collaborations, complex supply chains, information technology networks, and vast stores of data that reflect human knowledge and activity—from which no country can separate entirely. This report examines how achieving the valid aims of sovereign AI requires understanding and managing these interdependencies.
The potential impact and rapid pace of AI development and diffusion have widened digital sovereignty concerns globally and given them added urgency. So too have the dominance of the United States and China in AI development and deployment and the geopolitical rivalry between these two global powers, as other countries seek to close gaps and avoid being caught in between. Ambitions around AI compute, data, and models take many forms as countries seek greater security, resilience, economic competitiveness, and cultural-linguistic inclusion through AI sovereignty strategies. With India, a leader in AI sovereignty initiatives, hosting the February 2026 AI Impact Summit, the topic will be on the international stage.
There are sound reasons for countries to seek agency over AI systems. Support for multiple languages, for example, clearly enhances the utility of AI, providing wider access to the knowledge and benefits that AI enables. Developing or operating AI systems domestically can provide societal benefits and is often deemed essential for national security and for domestic and international competitiveness. These benefits are not guaranteed, however: the complexity and cost of domestic systems may render them infeasible or inefficient, and their performance, resilience, and security may not equal those of international alternatives. As a result, sovereign AI systems can lead to stranded or underused investment.
Sovereign AI systems could also fragment markets, slow the global development and diffusion of AI, and reduce host countries’ economic competitiveness. Such systems can become tools for digital authoritarianism within countries, eroding individual rights. Some countries pursue sovereign AI to secure influence within emerging global AI governance networks. Without coordination across borders, such fragmentation could reduce interoperability among AI systems. Conversely, some countries with global influence have pursued “sovereign AI strategies” to cement or extend existing dominance.
Thus, AI sovereignty presents complex trade-offs and raises key questions for global AI players, including the United States and China, as they seek to diffuse their AI products, and for the many other countries that want their own AI systems.
- How can countries capture the economic benefits of domestic AI systems while avoiding inefficient investments, underperformance, and reduced competitiveness?
- How should countries reconcile AI sovereignty with international cooperation in areas like safety and security?
- How can governments ensure that sovereign AI systems protect human rights rather than serve as instruments of digital authoritarianism?
- How can countries manage such objectives in ways that avoid fragmentation or stranded investment?
This report examines these trade-offs and how governments can manage them. It describes the aims and motivations of AI sovereignty aspirations, the geopolitical landscape in which they operate, and how various governments are responding. The report then proposes a policy framework centered on a carefully tailored assessment of advantages and vulnerabilities tied to the essential building blocks of AI—the various layers of the AI value chain and ecosystems that make up the AI stack—and the dependencies they present. These trade-offs call for what we describe as “managed interdependence,” reconciling state autonomy with necessary and beneficial international cooperation and coordination. The report considers how countries can navigate these trade-offs in the context of a turbulent global order.
Acknowledgements and disclosures
Joshua P. Meltzer contributed to this report during his tenure as a senior fellow at Brookings and a founder of the Forum for Cooperation on AI; the authors are grateful for his contributions. They also thank Pablo Chavez, Samm Sacks, and David Shrier for their generous input to the report and the events that informed it; Michelle Du, Carolina Oxenstierna, and Shreya Sampath for research assistance; and Antonio Saadipour, Massimiliano Colonna, and Adelle Patten of the Brookings Institution for editing and production assistance.
Amazon, Google, Meta, Microsoft, and the Taiwan Semiconductor Manufacturing Company are donors to the Brookings Institution. Amazon Web Services, Google, Meta, and Microsoft are donors to CEPS. Brookings and CEPS recognize that the value they provide is in their absolute commitment to quality, independence, and impact. The findings, interpretations, and conclusions in this report are not influenced by any donation.
We are supported by a diverse array of funders. In line with our values and policies, each Brookings publication represents the sole views of its author(s).