Global energy demands within the AI regulatory landscape

Brooke Tanner, Research Analyst
Derek Belle
Cameron F. Kerry, Ann R. and Andrew H. Tisch Distinguished Visiting Fellow - Governance Studies, Center for Technology Innovation (CTI)
Nicoleta Kyosovska, Research Assistant - Center for European Policy Studies
Andrea Renda, Senior Research Fellow and Head of Global Governance, Regulation, Innovation and the Digital Economy (GRID) - Center for European Policy Studies (CEPS)
Elham Tabassi
Andrew W. Wyckoff

April 10, 2026


  • The dialogue examined how rapid AI-driven growth in data centers is driving increasing electricity and water consumption worldwide.
  • Participants discussed how international governance efforts may help build out measurement, standards, and reporting tools, and explored energy policy and investment strategies for managing energy constraints in the context of hyperscaler expansion and sustainability goals.
Image: A collage of circuit boards interwoven with aerial photographs of water treatment facilities, agricultural lands, industrial sites, and mining machinery. Credit: Sinem Görücü / https://betterimagesofai.org / https://creativecommons.org/licenses/by/4.0/
Editor's note:

This background briefing guide was distributed to participants ahead of the Forum for Cooperation on AI (FCAI) dialogue on Nov. 18, 2025.

Introduction

Energy consumption from computing for artificial intelligence (AI) has become a high-profile issue with the rapid advancement of AI models and tools. In our 2022 paper discussing possibilities for global collaboration on large-scale research and development projects that can help address significant global challenges, including climate change, we noted the potential adverse impact of AI’s energy consumption. At that time, outlying predictions held that AI could consume as much as 20% of the world’s energy by 2025, yet the International Energy Agency (IEA) reported that, despite large increases in the number of data centers, energy demand had remained level because of efficiency improvements in equipment and cloud services.

That picture has since changed dramatically. Uncertainties remain as to the extent of increases in consumption because of opacity in corporate disclosures and inconsistencies in methodologies. Nevertheless, it has become clear that advanced AI models are a new, fast-growing source of global energy consumption. Hyperscalers, the vertically integrated cloud providers and tech giants that operate large-scale data centers with distributed computing operations, have been vocal in seeking increased generation and transmission of electricity as they rush to construct more and larger data centers, which demand growing amounts of electricity, water, and other resources to train models and run inference. Whatever the uncertainties of corporate disclosures on energy usage, they reflect significant increases, and independent projections indicate that energy demand from data centers, driven primarily by AI, will add hundreds of terawatt-hours (TWh) to global energy consumption by 2030.

This paper will discuss these growing energy demands and the issues they present. It will explore ways to improve understanding of AI energy consumption and its impacts on the environment locally and globally, as well as future constraints that power availability and grid capacity may place on AI compute. The discussion will ask what international policies and initiatives can help address these issues.

Energy and the hyperscaler model

Since our 2022 report, growth in data center electricity consumption has far outpaced growth in overall electricity use. In 2024, global data center electricity consumption was approximately 415 terawatt-hours (TWh), about 1.5% of the world’s total electricity use. This figure has grown at a compound annual growth rate of 12% since 2017, more than four times faster than total global electricity consumption. By one estimate, data center energy consumption could approach 1,050 TWh by 2026; if data centers were a country, that would make them the fifth largest energy consumer in the world, between Japan and Russia.
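
As a rough check on these figures, the sketch below derives the 2017 starting point implied by the 415 TWh baseline and the 12% compound rate, then extrapolates that trend to 2026. The gap against the 1,050 TWh estimate illustrates how much AI-driven acceleration that estimate assumes; only the baseline and growth rate come from the sources above.

```python
# Back-of-the-envelope check on the growth figures cited above.
# Only the 415 TWh 2024 baseline and the 12% compound annual growth
# rate come from the text; everything else is derived arithmetic.

baseline_2024_twh = 415.0  # global data center electricity use, 2024
cagr = 0.12                # compound annual growth rate since 2017

def project(twh: float, rate: float, years: int) -> float:
    """Project consumption forward (or back) with compound growth."""
    return twh * (1 + rate) ** years

# Implied 2017 starting point: about 188 TWh.
print(f"Implied 2017: {project(baseline_2024_twh, cagr, -7):.0f} TWh")

# Continuing the 12% trend yields roughly 520 TWh in 2026, far below
# the 1,050 TWh estimate, which assumes AI-driven acceleration.
print(f"2026 at 12% CAGR: {project(baseline_2024_twh, cagr, 2):.0f} TWh")
```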

A consensus among leading analytical bodies points to a doubling or more of this demand by 2030. The IEA, in its base case scenario, projects that global data center electricity consumption could reach 945 TWh by 2030, climbing further to 1,200 TWh by 2035. Analysis from Deloitte projects a similar trajectory, forecasting a rise to 1,065 TWh by 2030. Goldman Sachs Research forecasts a 160–165% increase in power demand (measured in capacity) by 2030 compared to 2023 levels.

About 60% of the electricity used in data centers powers the servers (computers equipped with CPUs/GPUs, memory, storage controllers, and other data-processing components, mounted on racks). This share reaches around 75% in larger hyperscaler data centers optimized for AI workloads, where servers include chips that draw 2–4 times more power than their conventional counterparts.

The amount of new cloud computing dedicated to AI is a subset of the overall cloud market, with estimates that AI will drive 10-15% of total cloud spend by 2030. The cloud stack for AI comprises the infrastructure layer, forecast to account for 29% of the market: the computing and associated networking, frameworks, and services that organizations can use to build their own foundational AI models. The second layer is the AI tooling or platform layer, where organizations build their own AI models or custom foundational AI applications using models and datasets from other providers; it is estimated to make up a similar 30% of the market. The third layer, the AI application or software layer, is expected to have the highest share at 40% of the market. This layer typically provides off-the-shelf applications offering services, such as coding and content creation, that organizations can rapidly access with limited technical know-how.

The recent growth in demand is driven by increases in AI training and inference at the infrastructure layer. The training process for a frontier model is energy intensive, requiring thousands of high-performance chips to run continuously for weeks or months and consuming gigawatt-hours of electricity. Some estimates find that as much as 80-90% of computing power used for AI goes to inference. In 2024, a single query on an advanced generative AI model like ChatGPT required an estimated 2.9 watt-hours of electricity, nearly 10 times the 0.3 watt-hours needed for a conventional Google search. Newer measurements suggest median energy per text query has fallen to 0.24-0.3 watt-hours, although this can be much higher for long reasoning or multimodal prompts. The IEA’s modeling indicates that electricity consumption from servers used for AI workloads, predominantly inference, will grow by 30% annually as adoption increases. This single category of usage is expected to account for almost half of the net increase in global data center consumption between 2024 and 2030.
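
To see how sensitive aggregate projections are to these per-query figures, the sketch below converts a per-prompt watt-hour estimate into an implied annual total. The one-billion-prompts-per-day volume is a hypothetical input chosen for illustration, not a figure from this paper.

```python
# Convert per-query inference energy into an implied annual total.
# The per-query figures come from the text; the daily query volume
# is hypothetical.

def annual_twh(wh_per_query: float, queries_per_day: float) -> float:
    """Annual electricity use implied by a per-query figure, in TWh."""
    kwh_per_day = wh_per_query * queries_per_day / 1_000
    return kwh_per_day * 365 / 1_000_000_000  # kWh -> TWh

QUERIES_PER_DAY = 1e9  # hypothetical: one billion prompts per day

for label, wh in [("2024 estimate (2.9 Wh)", 2.9),
                  ("newer median (0.24 Wh)", 0.24)]:
    print(f"{label}: {annual_twh(wh, QUERIES_PER_DAY):.2f} TWh/year")
```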

The impact is especially acute in the United States, currently the world’s largest data center market, which accounted for 45% of global data center electricity consumption in 2024. The IEA estimates that U.S. data center energy demand will increase by 130% by 2030. This forecast is consistent with those of other expert bodies. The U.S. Department of Energy, in a 2024 report from Lawrence Berkeley National Laboratory (LBNL), found that data centers consumed about 4.4% of total U.S. electricity in 2023 and are projected to consume between 6.7% and 12.0% by 2028. In absolute terms, the LBNL report projects an increase from 176 TWh in 2023 to between 325 and 580 TWh by 2028. Bloomberg Intelligence predicted that energy demand for AI could grow to as much as four times its current level by 2032.

The forecast is also corroborated by individual AI developers. Anthropic estimated that by 2027, training a single frontier AI model will require five gigawatts (GW) of power and projected that the U.S. AI sector alone will require 50 GW of new electric capacity by 2028 to maintain global AI leadership. To put the 50 GW figure in context, it is about twice the peak electricity demand of New York City. Former Google CEO Eric Schmidt testified before Congress that data centers will need 29 GW of additional power by 2027, and 67 more GW by 2030.

These rapid increases pose a challenge for U.S. AI development. For the past two decades, U.S. electricity consumption was essentially flat; AI is now driving demand growth several times faster than those historical rates. AI companies are clamoring for gigawatts of new capacity within a few years, but current permitting processes for new power plants and high-voltage transmission lines can take over a decade in the U.S. and EU, with no guarantee of sustained growth at these elevated rates. By contrast, China brought over 400 GW of new power capacity online in a single year.

The IEA notes that while a typical data center can consume as much electricity as 100,000 households, the largest next-generation campuses currently under construction will demand 20 times that amount. This sharp increase in demand changes the nature of the challenge for grid operators. Historically, despite a near tripling of data center workloads between 2015 and 2019, the sector’s power demand remained relatively flat due to significant gains in energy efficiency. The current demand surge is different not only in scale but in character. Data center demand is highly concentrated in a few geographic clusters and driven by a small number of hyperscale technology companies. While data centers may only account for 3% to 4% (some estimate up to 9%) of global electricity demand by 2030, their share of local demand can be overwhelming, reaching 42% in Frankfurt or nearly 80% in Dublin.

Additionally, the cooling systems required for these servers drive substantial demand for water. In 2023, U.S. data centers consumed about 17 billion gallons of water, most of it (84%) in hyperscale and colocation facilities. Hyperscale data centers alone are expected to consume 16 billion to 33 billion gallons of water directly each year by 2028. At the same time, training an advanced model requires less water than is used on a square mile of farmland in a year.

Strategic options for energy constraints

Supply-side measures

In the U.S., hyperscalers like Google, Meta, and Amazon were estimated to spend $364 billion on data center construction in 2025. These three companies are collectively the largest corporate buyers of renewable energy in the world. In 2024 alone, Big Tech companies accounted for 43% of all clean energy power purchase agreements (PPAs)—long-term contracts to buy electricity from a specific generation project—signed globally. PPA prices rose by an average of 35% in 2024, driven largely by this surge in procurement from large AI developers. U.S. states are also competing for this investment: Texas is set to provide over $1 billion in subsidies for data centers in 2025, and Virginia offered $732 million in 2024.

While the hyperscalers are influencing the direction of investment, the physical development and operation of many data center campuses is handled by a broader set of specialist developers and providers that build and operate campuses and lease them to hyperscaler tenants. These operators are often backed by private equity and infrastructure funds and financed through project-level debt, as illustrated by deals in which firms such as Apollo, KKR, and Energy Capital Partners have acquired or funded developers to build large campuses for large technology companies. This multi-owner financing structure often involves mid-sized operators with less negotiating power and less in-house regulatory and sustainability capacity. Such fragmentation makes it difficult to attribute responsibility for long-term energy demand, procurement, and efficiency investments.

In 2024, the largest share (40%) of electricity used in data centers came from natural gas, followed by renewables (24%), nuclear power (20%), and coal (15%). Fifty-six percent of new global data center capacity between 2023 and 2035 is expected to come from renewable sources, but 64% of incremental generation in the same period is projected to come from fossil fuels due to existing fleet economics.

Several countries are looking to nuclear power as an attractive solution, providing carbon-free, high-capacity, continuously available baseload power. For example, France, where about 70% of electricity comes from nuclear power, touts the availability of this power source as an AI advantage. Data center operator Data4 recently signed a first-of-its-kind 12-year Nuclear Production Allocation Contract with the state-owned utility EDF. The deal provides Data4 with 40 MW of capacity directly from EDF’s nuclear reactors at a price tied to production costs rather than volatile wholesale market rates. In the U.S., the Palisades nuclear power plant in Michigan received approval from the Nuclear Regulatory Commission in August to restart operations. Amazon acquired a data center campus in Pennsylvania for $650 million that is directly powered by the adjacent Susquehanna nuclear power station. Microsoft has entered an agreement to restart part of the Three Mile Island nuclear plant in Pennsylvania to power its data centers. In parallel, NextEra Energy hopes to restart the Duane Arnold nuclear plant through a PPA.

Across the U.S., the Department of Energy is planning to support data centers built on federal land. Additionally, President Donald Trump announced billions of dollars in commitments from Middle Eastern states and investors tied to sales by U.S. chip and AI companies, and Interior Secretary Doug Burgum recently signed a memorandum of understanding with Abu Dhabi to expand cooperation on AI and energy.

Demand-side measures

On the demand side, hyperscalers are modifying their loads on electricity grids to reduce costs, anticipate energy availability, and meet sustainability goals. Hyperscalers often time non-urgent, energy-intensive tasks, like training new models or background data processing, to run when renewable energy is abundant or when the grid is underutilized. By training on electricity that would otherwise be wasted due to oversupply, AI companies reduce their marginal emissions.

In one field trial at an Oracle cloud data center, an AI workload manager dynamically slowed or paused less time-sensitive jobs during a grid stress event, cutting the data center’s power draw by 25% for three hours while maintaining service quality. The system redirected inference queries to data centers in other regions that were not experiencing grid strain; operators could route requests to regions with cleaner energy or spare capacity. As it has done for web search, Google is employing a variety of caching techniques to reduce inference time and energy use for AI queries. A recent study by Duke University found that if data centers nationwide can limit their power use during just the top few hours of peak grid demand each year, the U.S. grid could accommodate roughly 100 GW of additional data center load without building new power plants.
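
A minimal sketch of the kind of scheduling logic such a workload manager might apply appears below: deferrable work (training, batch processing) is paused first during a grid stress event, while latency-sensitive inference keeps running. The job classes, power figures, and reduction target are hypothetical; this is not Oracle's actual system.

```python
# Hypothetical demand-response scheduler: pause deferrable jobs until
# the data center's draw fits the target requested by the grid operator.

from dataclasses import dataclass

@dataclass
class Job:
    name: str
    power_mw: float
    deferrable: bool  # training/batch jobs can wait; inference cannot

def shed_load(jobs: list[Job], target_mw: float) -> list[Job]:
    """Greedily pause deferrable jobs until total draw <= target."""
    total = sum(j.power_mw for j in jobs)
    kept = []
    # Consider deferrable jobs first so inference is shed last, if ever.
    for job in sorted(jobs, key=lambda j: not j.deferrable):
        if total > target_mw and job.deferrable:
            total -= job.power_mw  # paused for the duration of the event
        else:
            kept.append(job)
    return kept

jobs = [
    Job("model training", 40.0, deferrable=True),
    Job("batch data processing", 15.0, deferrable=True),
    Job("inference serving", 45.0, deferrable=False),
]
# A 25% cut from a 100 MW draw; the greedy pass may overshoot slightly.
for job in shed_load(jobs, target_mw=75.0):
    print(f"still running: {job.name} ({job.power_mw} MW)")
```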

International governance and energy

International initiatives on AI and energy

Several existing formal frameworks and standards at the international level aim to address AI’s energy consumption and environmental impact. These include frameworks from intergovernmental bodies that have been subjects of FCAI discussions:

  • In 2024, the OECD AI Principles were revised to add language calling for “inclusive growth, sustainable development and well-being,” explicitly urging stakeholders to engage in “responsible stewardship of trustworthy AI in pursuit of beneficial outcomes for people and the planet… invigorating inclusive growth, well-being, sustainable development and environmental sustainability.” This explicit reference to environmental sustainability was added in the 2024 revisions to the original 2019 principles. The recommendation does not specify how energy for AI should be measured or constrained, but it positions environmental impacts, including energy and water use from AI compute and data centers, as part of responsible AI.
  • In turn, the G7 Hiroshima AI Process on promoting safe, secure, and trustworthy AI included an International Code of Conduct, with a supporting voluntary reporting framework operationalized by the OECD. This transparency and accountability framework included questions about research and investment undertaken to minimize “environmental risks from AI” and maximize “environmental benefits from AI,” and about work with civil society in support of the U.N.’s sustainable development goals (SDGs). The framework encourages firms to report but leaves specific metrics and methodologies largely up to the reporting organizations. Eighteen of the 20 initial responses referenced the company’s energy or environmental footprint and highlighted efforts to reduce AI’s carbon footprint, develop more energy-efficient systems, and utilize AI for sustainability applications.
  • The U.N.’s sustainable development goals also include environmental sustainability, especially goal 7 on affordable and clean energy and goal 13 on climate action. To this end, the 2021 UNESCO Recommendation on the Ethics of AI establishes “environment and ecosystem flourishing” as one of its four core values and calls for AI technologies to be continuously assessed for their direct and indirect impacts on sustainability. It is non-binding but provides an ethical basis for integrating AI-related energy demand into national AI strategies and impact assessments.
  • The Global Digital Compact includes a commitment to “promote sustainability across the life cycle of digital technologies, including context-specific measures to increase resource efficiency and to conserve and sustainably use natural resources and that aim to ensure that digital infrastructure and equipment are sustainably designed” in ways consistent with the SDGs. Additionally, the U.N.’s Pact for the Future emphasizes that “guaranteeing access to energy and ensuring energy security is critical for achieving the Sustainable Development Goals, promoting economic development, social stability, national security and the welfare of all nations worldwide,” including efforts to establish “resilient and secure cross-border energy infrastructure” and to “increase substantially the share of renewable energy.” These instruments do not create AI-specific energy caps but link digital policy and infrastructure build-out to broader commitments on energy access, security, and decarbonization.

Standards development organizations have also turned their attention to AI energy usage:

  • The ISO/IEC 30134 series provides internationally agreed key performance indicators (KPIs) for data center resource efficiency, covering metrics like power usage effectiveness for energy, water usage effectiveness, and carbon usage effectiveness (see the worked examples after this list). These standards give operators and regulators common methods to quantify the electricity and water consumption of data centers and to benchmark efficiency. They help track how much additional energy AI workloads are driving at the facility level, even though they do not distinguish AI compute from other uses.
  • Other ISO/IEC AI standards (for example ISO/IEC 42001 for an AI management system and other ISO/IEC technical reports such as 24027, 24028, 24029-1) are referenced more often in regulatory contexts. These standards, along with existing environmental standards, guidelines, certifications, and best practices for energy-efficient data centers, may help standardize reporting around energy usage.
  • The IEEE P7100 Working Group is developing a technical standard for measuring AI’s environmental impact, including from training models and running inference. The standard will also differentiate AI-specific compute from general-purpose compute measurements. P7100 is one of the first attempts to develop model- and workload-level metrics for AI’s energy and environmental footprint, bridging current gaps between infrastructure-level data center KPIs (like power usage effectiveness) and AI-specific energy accounting.
  • On the emissions accounting side, the Greenhouse Gas Protocol, developed by the World Resources Institute and the World Business Council for Sustainable Development, is being updated to include data center reporting. This sectoral standard guides how companies report greenhouse gas emissions, and a public consultation proposed retaining a dual-reporting approach covering both location-based emissions (where emissions occur) and market-based emissions (where renewable energy is purchased), with steps to strengthen the accuracy of market-based claims and to tighten zero-carbon power credits.
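
As a worked illustration of the first set of KPIs in the list above, the sketch below computes power usage effectiveness (PUE) and water usage effectiveness (WUE) from annual facility totals, following the standard ratio definitions (total facility energy over IT equipment energy; site water use per unit of IT energy). The input figures are hypothetical.

```python
# ISO/IEC 30134-style data center KPIs, computed from annual totals.
# Formulas follow the standard ratio definitions; inputs are hypothetical.

def pue(total_facility_kwh: float, it_equipment_kwh: float) -> float:
    """Power usage effectiveness: 1.0 is ideal; overhead raises it."""
    return total_facility_kwh / it_equipment_kwh

def wue(site_water_liters: float, it_equipment_kwh: float) -> float:
    """Water usage effectiveness, in liters per kWh of IT energy."""
    return site_water_liters / it_equipment_kwh

it_kwh = 80_000_000        # hypothetical annual IT equipment energy
facility_kwh = 87_200_000  # IT energy plus cooling and power delivery
water_liters = 40_000_000  # hypothetical annual site water use

print(f"PUE: {pue(facility_kwh, it_kwh):.2f}")        # 1.09
print(f"WUE: {wue(water_liters, it_kwh):.2f} L/kWh")  # 0.50
```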

Most of these frameworks remain voluntary or non-binding and lack enforceable compliance mechanisms or metrics for environmental disclosures. A notable exception is the EU AI Act, which requires providers of general-purpose AI models to provide technical documentation and meet transparency requirements. Legal analysts argue this includes reporting the known or estimated energy consumption of the model to the EU AI Office. The voluntary code of conduct also asks for disclosure of the energy consumption of training as well as the benchmarked amount of computation used for inference, which relates to energy consumption during inference. The EU’s energy efficiency directive includes KPIs for large data centers within the EU, such as total energy consumption, total water input, and total renewable energy consumption, which are used for the regular assessment of data center energy efficiency and sustainability. The directive will generate a dataset on the energy footprint of large EU facilities, which could be used to infer how AI-driven capacity additions are affecting electricity demand and grid planning.

Energy measurement and disclosures

Despite greater convergence around measures of AI energy usage and the above efforts toward greater transparency and consistency, governments and civil society organizations have called for increased transparency in reporting energy usage in the aggregate, as well as by category of usage, such as training, inference, and cooling (which could include water use). Currently, technology companies face few, if any, requirements to disclose the energy used to train their AI models, the water consumed by their data centers, or the overall carbon footprint of their AI operations in a consistent, comparable, and verifiable manner.

Corporate disclosures on AI’s environmental impact, while improving, remain inconsistent, non-standardized, and often incomplete. Methodological differences in measurement and reporting, such as focusing solely on inference while omitting training costs, using median instead of mean consumption figures, and employing “market-based” instead of “location-based” emissions calculations, make meaningful cross-model comparisons extremely difficult for regulators and consumers.  

Some AI and cloud companies voluntarily report the environmental footprint of their models, but approaches vary. In August, Google released a technical paper on its methodology for measuring the environmental impact of inference by its Gemini models. Inference (running the model) typically accounts for the majority of an AI model’s life cycle energy consumption. According to the analysis, a median text-generation prompt in Gemini apps consumes approximately 0.24 watt-hours (Wh) of energy, emits 0.03 grams of carbon dioxide equivalent (g CO₂e), and uses 0.26 milliliters (mL) of water. The paper states the per-prompt energy is less than the amount used watching TV for nine seconds. The methodology accounts not only for the direct power drawn by its AI accelerators but also for the energy consumed by host CPUs and DRAM, by idle systems provisioned for reliability, and for the full data center overhead, reflected in a fleet-wide average power usage effectiveness (PUE) of 1.09 (meaning about 9% extra energy for cooling and power delivery). By accounting for idle redundancy and overhead, the reporting sets a high bar for inference measurement.
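
A simplified reconstruction of this accounting might look like the sketch below: sum the accelerator, host, and idle-capacity energy for a prompt, then apply the fleet PUE multiplier. Only the 1.09 PUE and the roughly 0.24 Wh total come from Google's paper; the component split is hypothetical.

```python
# Simplified per-prompt energy accounting in the style described above.
# Only the 1.09 PUE and the ~0.24 Wh result reflect Google's reported
# figures; the split across components is hypothetical.

FLEET_PUE = 1.09  # fleet-wide average power usage effectiveness

def per_prompt_wh(accelerator_wh: float, host_cpu_dram_wh: float,
                  idle_capacity_wh: float) -> float:
    """Per-prompt energy including full data center overhead."""
    it_energy = accelerator_wh + host_cpu_dram_wh + idle_capacity_wh
    return it_energy * FLEET_PUE

# Hypothetical component split consistent with a ~0.24 Wh median prompt.
print(f"{per_prompt_wh(0.14, 0.05, 0.03):.2f} Wh per prompt")
```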

Mistral AI published a life cycle analysis for its Mistral Large 2 model, aligned with international standards ISO 14040 and 14044, to include upstream impacts such as hardware manufacturing and transportation in addition to operational energy and water use. The analysis calculated the impact of training Mistral Large 2, the footprint of 18 months of usage, and the impacts of a 400-token response from the AI assistant Le Chat.

The analysis found that in the first 18 months, the training and use of Mistral Large 2 generated 20.4 kilotons of CO2e, consumed 281,000 cubic meters of water, and caused 660 kilograms of resource depletion, measured in antimony equivalents (kg Sb eq). On a per-query basis, a typical 400-token response from Le Chat generated 1.14 grams of CO2e, consumed 45 milliliters of water, and caused 0.16 milligrams of Sb eq of resource depletion.

Other companies have reported partial data in broader AI governance or sustainability reports. Microsoft’s 2025 environmental sustainability report noted that, alongside a 168% increase in energy use since 2020, the company’s total emissions have grown by 23.4%, citing AI and cloud expansion as key growth-related factors. This high-level reporting does not detail the services or facilities that account for the emissions or energy usage. Similarly, Meta’s 2025 sustainability report discusses decreases in emissions and water usage since 2021 as well as the challenges of designing sustainable data centers for AI, but it does not detail specific model energy demands.

Even in model-specific reporting, methodological differences including what is in scope (training or inference, infrastructure or AI accelerators), how averages are calculated (mean or median), and which emissions accounting rules are applied can lead to large differences in published “per-query” or “per-model” environmental scores. For researchers and policymakers interested in energy demand, this makes cross-model comparisons dependent on the underlying methodological assumptions.

When DeepSeek claimed it trained its reasoning model R1 for roughly $294,000 using 512 Nvidia H800 GPUs, media coverage framed this as evidence that competitive frontier models can be trained for a fraction of the cost of U.S. counterparts. Subsequent analysis, however, noted that the figure covers only the final reinforcement learning stage and excludes the much more compute-intensive pre-training of the base model (DeepSeek V3), which used about 2,048 GPUs over two months, with an estimated training cost in the low single-digit millions. When those earlier stages and infrastructure costs are included, DeepSeek’s effective energy and cost per model are closer to those of other large-scale systems. At the same time, the multiple innovations in R1’s engineering, such as predicting two tokens at a time instead of one and calculating model weights with eight instead of 16 bits of precision, do appear to create significant energy savings per token and reduce inference energy for a given level of performance.

To address the divergence in reporting methodologies and the challenges of comparing environmental “scores” across leading AI models, some groups have called for minimum disclosure standards accompanied by standardized KPIs. Some efforts have emerged: Hugging Face publishes a leaderboard of “AI Energy Scores,” giving models a star rating for energy efficiency based on GPU energy consumption in watt-hours per 1,000 queries.
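
The sketch below shows the general shape of such a benchmark: normalize measured GPU energy to watt-hours per 1,000 queries, then bin the result into a star rating. The cutoffs and model figures here are hypothetical, not the leaderboard's actual methodology.

```python
# Hypothetical energy-efficiency scoring in the spirit of the
# AI Energy Score leaderboard: lower Wh per 1,000 queries, more stars.

def wh_per_1k(total_gpu_wh: float, num_queries: int) -> float:
    """Normalize measured GPU energy to watt-hours per 1,000 queries."""
    return total_gpu_wh / num_queries * 1_000

def star_rating(score: float) -> int:
    """Bin a Wh/1k score into 1-5 stars (hypothetical cutoffs)."""
    for threshold, stars in [(50, 5), (150, 4), (400, 3), (1_000, 2)]:
        if score <= threshold:
            return stars
    return 1

for model, gpu_wh, queries in [("model-a", 42.0, 1_000),
                               ("model-b", 620.0, 1_000)]:
    score = wh_per_1k(gpu_wh, queries)
    print(f"{model}: {score:.0f} Wh/1k queries -> {star_rating(score)} stars")
```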


Regional impacts from new AI energy demand

Historically, the main consideration for data center locations has been network latency, including proximity to end-users and major internet exchange points. Today, access to energy and its cost have become major determinants of data center location as well.

Surging AI demand is challenging more mature data center markets like Ireland and Northern Virginia in the United States. Utilities have struggled to build new transmission capacity fast enough to meet industry demand, leading to power shortage warnings and delays in connecting new facilities to the grid. Without sufficient guardrails, this concentration can also have negative impacts on residents.

Ireland has been one of Europe’s data center hubs because of its favorable tax regime and proximity to Europe. In 2023, data centers and other large energy users accounted for 21% of Ireland’s total national electricity demand, a share that could reach 30% by the early 2030s. In response, Ireland’s energy regulator, the Commission for Regulation of Utilities (CRU), adopted strict grid connection policies for data centers, including requirements that applicants demonstrate on-site power generation capabilities and the flexibility to reduce demand during periods of national grid stress, shifting the grid stability burden from the public utility to private data center operators.

Grid capacity in these more mature markets is often at or near its limit, with land for new large-scale development becoming scarce and expensive and regulators imposing stricter controls. The government of the Netherlands, for example, implemented a nine-month moratorium on new hyperscale data center permits to assess their impact on the national power grid. Such constraints are leading developers to look further afield.

In Southeast Asia, hyperscaler cloud providers have injected capital in the region, including Amazon’s plans for a $6 billion investment in Malaysia by 2037 and a $5 billion investment in Thailand. Similarly, Google has plans for a $1 billion facility in Thailand, and Microsoft is in the midst of a regional expansion, pledging a $1.7 billion investment into Indonesia over the next few years. These countries have attracted large investments through greater availability of land, more competitive power costs, government support through tax incentives, and relatively easy permitting processes. Southeast Asia’s data center market is projected to more than double in value from $13.71 billion in 2024 to $30.47 billion in 2030.

China, the world’s second-largest data center market, accounts for 25% of global data center electricity consumption. The Chinese government is pursuing a massive build-out of AI and data infrastructure to power its technological ambitions. This involves aggressive data center expansion, with over 500 data center infrastructure projects announced between 2023 and 2024. According to state reports, at least 150 of these data centers were operating by the end of 2024.

This expansion is supported by aggressive growth in energy supply. On the one hand, China is deploying renewable energy on a large scale throughout its energy systems. Early 2025 estimates suggested that in the first half of the year, China installed 357 GW of new wind and solar capacity, an amount greater than the entire installed power capacity of India. As a result, renewables now account for 60% of China’s total installed power capacity, while coal’s share of actual generation has fallen to historic lows. On the other hand, China is also massively expanding its coal-fired power fleet, driven by concerns over energy security and grid reliability, and is still on track to reach decade highs in new coal power capacity additions. In the first half of 2025, China commissioned 21 GW of new coal power plants, the highest amount for that period since 2016, and total 2025 additions are estimated to exceed 80 GW. During the same period, 75 GW of new coal projects were proposed, the highest in a decade, while retirements of old plants were negligible. Meanwhile, in the first half of 2025, global electricity generation from wind and solar exceeded coal for the first time, driven largely by efforts in China and India.

Connecting data centers to energy systems

Typically, AI hyperscalers integrate the complete AI stack, from chip design to AI model development, into their business models. This model effectively depends on unlimited computing power but is constrained by physical energy limits, and its future effectiveness may depend on solving power availability and grid capacity bottlenecks.

One significant challenge to scaling energy for data centers is the process of connecting new facilities to electrical grids, which were often not designed to handle such concentrated loads. Before a data center can draw electricity, it must apply to the grid operator for an interconnection agreement, which often triggers any needed upgrades to transmission lines or substations before the facility can connect to the transmission network.

In many mature markets, this grid interconnection queue has become a multiyear waiting list. In established European and North American hubs, the average wait time for a new large-scale grid connection is now between seven and 10 years, with some projects facing delays of up to 13 years. A single new data center campus can necessitate the construction of entirely new substations and high-voltage transmission lines. A single hyperscale data center campus can have a power demand equivalent to that of a large industrial city or an aluminum smelter, but the development timeline is far more compressed and uncertain.

While a new data center can be designed and built in two to three years, it is effectively useless unless it can draw power from the grid. The IEA estimates that nearly 20% of planned data center projects globally could face significant delays due to these grid connection challenges. Data center developers are now actively “shop[ping] around” for locations with the fastest interconnection opportunities, with some estimates suggesting they file speculative requests for five to 10 times more capacity than they will ultimately build to secure a place in the queue.

Beyond delays, many electrical grids were designed and built decades ago for a centralized, one-way flow of power from large fossil fuel plants to distributed customers. They were not engineered to handle the massive, concentrated, two-way power flows required by data center clusters combined with large-scale renewable energy projects.

Safety margins are the buffer capacity needed to handle unexpected events like a power plant failure or a sudden spike in demand. Seven of 13 major U.S. grid regions are projected to operate below their critical safety margins by 2030, significantly increasing the risk of blackouts. The potential consequences include more frequent power outages, the implementation of rolling blackouts to manage load, and extreme price volatility during peak periods.

In deregulated electricity markets, the grid operator is responsible for approving a connection request but is not responsible for ensuring that sufficient power generation capacity is built to meet that new demand. This responsibility is left to the market, creating a potential gap between interconnected load and available supply. The challenge is not just procuring enough power but also delivering this power to specific locations in a timely manner. In some legacy data center hubs, there is simply no more available capacity on the local grid to support new large-scale developments, leading to official and de facto moratoriums. The constant electricity demand also places an ongoing strain on grid components like transformers and switchgear, which are already facing their own supply chain backlogs with wait times for critical components doubling in recent years.

Optimizing the energy system with AI tools

A hopeful counterpoint to the high energy cost of AI is that the technology itself can help improve energy and grid management. Through the application of these technologies, the AI industry has achieved gains in computational and energy efficiency. According to the IEA, the broad application of AI tools could free up to 175 GW of transmission capacity without the need to build new lines.

For example, by integrating data from meters, weather sensors, and grid equipment, experts can use AI algorithms to predict electricity demand with greater accuracy than traditional methods. Google’s DeepMind division applied AI to predict wind power output 36 hours in advance, a capability that increased the economic value of wind energy by approximately 20%. In South Africa, the utility Eskom is using AI for enhanced grid monitoring and efficiency improvements.

Additionally, scientists can use AI systems to continuously monitor the health of critical energy infrastructure and better plan for maintenance needs. This proactive approach can reduce equipment downtime by up to 50% and lower maintenance costs by 10% to 40%. By training AI models on environmental data, operators can also optimize the orientation of solar panels or the pitch of wind turbine blades to maximize generation based on weather patterns or real-time grid conditions. In Uganda, the company OrxaGrid is deploying a platform that uses AI and Internet of Things (IoT) sensors to conduct predictive maintenance on local electricity grids, improving reliability in underserved areas.

In Hamburg, Germany, researchers developed a simulation of the city’s port and adjacent urban area to model the impact of smart grid technologies. The study found that using AI systems to manage virtual energy buffers and implement strategies to shave peak energy loads could reduce the amount of renewable energy overproduction required to ensure grid stability from 95% to 65%, a significant efficiency gain.

Planners are also using AI tools to design better energy systems. In Pakistan, city planners are using AI tools to optimize the development of solar energy systems, making clean power more accessible and affordable for low-income families. In parts of Africa, governments and companies are also using AI tools to optimize data centers’ energy use.

Responses to the Hiroshima AI Process reporting framework highlighted how AI can support broader sustainability goals. Fujitsu is using AI to reduce greenhouse gas emissions; Microsoft is investing in AI tools to design and test materials with greater accuracy, better manage water resources, and expedite the licensing process for carbon-free electricity, alongside its goal to add 10.5 GW of renewable energy to the electricity grid. Google notes its research on environmental risks and benefits, as well as work applying AI to the SDGs. OpenAI hosted an AI hackathon to “accelerate clean energy development.” Salesforce discussed its membership in the Coalition for Sustainable AI, working toward the SDGs, and the AI Energy Score benchmarking tool.

Companies are also using AI to make the AI products themselves even more efficient. Google, for instance, reports that between May 2024 and May 2025, it reduced the median energy consumption per Gemini prompt by a factor of 33 and the associated carbon footprint by a factor of 44. The gains are driven by more efficient model architectures, algorithms, and quantization (including Accurate Quantized Training), optimized inference and serving, custom-built tensor processing units, and optimized idling. However, these impressive per-unit efficiency gains could be overwhelmed by the growth in overall demand, an example of Jevons Paradox—an economic principle wherein an increase in the efficiency with which a resource is used tends to increase, rather than decrease, the rate of consumption of that resource. If AI becomes more efficient and less expensive to run on a per-query basis, the total number of queries could skyrocket.
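
The arithmetic behind this dynamic is simple, as the sketch below illustrates: a 33x per-prompt efficiency gain is fully offset once query volume grows by more than 33x. Only the 33x factor and the 0.24 Wh median come from Google's reported figures; the volume multipliers are hypothetical.

```python
# Jevons paradox arithmetic: total energy = per-query energy x volume.
# The 33x efficiency factor and 0.24 Wh median come from the text;
# the query volume multipliers are hypothetical.

wh_2025 = 0.24       # reported median energy per prompt, May 2025
wh_2024 = 0.24 * 33  # implied May 2024 level (~7.9 Wh per prompt)

for volume_multiplier in (10, 50, 100):
    relative_total = (wh_2025 / wh_2024) * volume_multiplier
    trend = "falls" if relative_total < 1 else "rises"
    print(f"{volume_multiplier}x more queries: total energy {trend} "
          f"to {relative_total:.1f}x the 2024 level")
```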

While the energy efficiency of individual AI chips continues to improve, the size and complexity of AI models and the sheer volume of user queries are growing much faster. The IEA’s projection that global data center electricity use will more than double by 2030 already accounts for anticipated efficiency gains. The historical trend of flat power consumption despite rising workloads has reversed, with efficiency gains slowing since 2020.

The evidence of this paradox is apparent even within the reporting itself. While Google reduced its data center emissions by 12% in 2024 through clean energy procurement and operational improvements, its absolute electricity consumption from data centers grew by 27% year-over-year. The company’s overall greenhouse gas emissions have risen by 51% between 2019 and 2024, likely with the expansion of AI-related services as a key contributing factor.

The crux of the question is whether the energy savings generated by the use of AI in the energy sector can ultimately offset the technology’s own rapidly growing energy consumption. The current evidence suggests a significant temporal mismatch. The energy demand from AI data centers is immediate and accelerating, as companies invest billions in new AI chips and facilities that will draw power in the next one to five years. In contrast, the efficiency gains from AI in the energy sector are more incremental and diffuse and may take a decade or more to materialize at scale. For example, upgrading the entire electric grid to be AI-optimized with smart devices and IoT could easily be a 10-to-20-year process given regulatory, infrastructure, and capital constraints.

Conclusion

While future questions on energy demand, consumption, and grid connectivity remain uncertain, it is clear that data center demand for energy, driven by AI training and applications, is causing adverse economic impacts in regions providing electricity and water to data centers. More companies are disclosing their energy and water usage, but these reports are not standardized, and even international frameworks often lack interoperability in responses. Greater transparency and consistency are essential to understanding AI’s energy footprint and managing its long-term demand.

Discussion questions

  • How should governments and companies approach private-public partnerships, funding, and implementing energy projects associated with AI?
    • How can these projects drive large-scale economic transformation in certain regions?
    • What are social and economic factors to consider?
  • How do different energy sources (renewables, coal, nuclear, etc.) affect our assessment of data centers in terms of sustainability, efficiency, and utility? What are the tradeoffs of each source of energy?
  • How are your governments and organizations currently assessing and addressing the energy demands associated with AI infrastructure and development? Where have you seen points of disagreement among stakeholders?
  • What responsibilities around governing AI and energy use should fall to national governments? And where can global organizations play a role in setting goals, standards, and norms?
  • What transparency and reporting requirements would assist governments and companies in assessing the environmental benefits, risks, and impacts of AI systems across their life cycles? What challenges arise when evaluating claims around AI’s carbon footprint?
  • What are promising use cases of AI in cutting carbon emissions and improving environmental sustainability?

Acknowledgements and disclosures

Amazon, Google, Meta, and Microsoft are general, unrestricted donors to the Brookings Institution. The findings, interpretations, and conclusions posted in this piece are solely those of the authors and are not influenced by any donation.
