
NVIDIA-backed trial shows AI data centers cut power on demand
Context and chronology
Over a five-day exercise in December 2025, a London facility ran coordinated responses to simulated grid events using control software provided by Emerald AI, in partnership with NVIDIA, National Grid, Nebius and the Electric Power Research Institute. Organizers logged each event, the control signal, and workload continuity to prove that core processing stayed online while power consumption shifted, and they agreed to share anonymized data with regulators and planners. The exercise was designed both to measure discrete, short-duration responses and to test sustained trimming across predictable peak windows, building an auditable dataset intended for industry use and policy review.
Operational performance
Measured outcomes included a top-end reduction of 40% in facility draw and a rapid-response test in which load fell 30% within 30 seconds, demonstrating both deep and fast curtailment modes. The site also executed longer-duration trimming, sustaining roughly 10% lower consumption across multi-hour periods linked to predictable spikes such as sporting-event halftimes. Engineers confirmed that prioritized workloads continued without observable service disruption by shifting or pausing noncritical tasks, showing that flexibility can be concentrated on fungible compute rather than achieved through wholesale service suspension. These operational results framed a proof of concept that software orchestration can deliver measurable, monetizable grid services from GPU-dense facilities.
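The prioritization approach described above can be illustrated with a minimal, hypothetical sketch — Emerald AI's actual control software is proprietary and not publicly documented. The idea is simply that workloads are tagged by how fungible they are, and a controller pauses the most fungible jobs first until the facility's draw meets the requested reduction, never touching critical serving workloads:

```python
# Hypothetical sketch of priority-based curtailment; NOT the trial's
# actual control system. Jobs are paused most-fungible-first until the
# estimated power shed meets the requested reduction target.

from dataclasses import dataclass


@dataclass
class Job:
    name: str
    power_kw: float   # estimated draw while running
    priority: int     # 0 = critical (never paused); higher = more fungible
    paused: bool = False


def curtail(jobs: list[Job], baseline_kw: float, target_reduction: float) -> float:
    """Pause fungible jobs until draw falls by target_reduction
    (a fraction of baseline). Returns the reduction actually achieved."""
    goal_kw = baseline_kw * target_reduction
    shed_kw = 0.0
    # Consider the most fungible (highest-priority-number) jobs first.
    for job in sorted(jobs, key=lambda j: -j.priority):
        if shed_kw >= goal_kw or job.priority == 0:
            break  # target met, or only critical workloads remain
        job.paused = True
        shed_kw += job.power_kw
    return shed_kw / baseline_kw


# Illustrative numbers only (the trial did not publish per-workload data).
jobs = [
    Job("inference-serving", 400.0, priority=0),
    Job("checkpointed-training", 350.0, priority=1),
    Job("batch-evaluation", 250.0, priority=2),
]
achieved = curtail(jobs, baseline_kw=1000.0, target_reduction=0.30)
```

In practice a real controller would also need power telemetry, checkpointing hooks so paused training can resume losslessly, and verified measurement for settlement with the grid operator — the sketch only shows the prioritization logic.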
Market and regulatory implications
Partners described the exercise as a template for larger deployments and pointed to a planned 100 MW flexible AI facility in Virginia as the next application of these controls. NVIDIA argued such demand-side tools can relieve some immediate upgrade pressure on networks and accelerate connection approvals, while National Grid representatives said verified flexibility could shorten permitting timelines and lower upfront reinforcement costs. If adopted at scale, the model could create multiple revenue streams — from ancillary services to capacity markets — and alter how operators value site selection, procurement and financing.
Broader sector context and caveats
The trial's promise sits alongside larger industry frictions: rapid AI-capacity builds have produced a wave of new projects whose utilization profiles remain uneven, and some markets have already seen permitting and community resistance that delayed or reshaped developments. Industry monitors estimate roughly $64 billion of U.S. datacenter projects have experienced delays or cancellations tied to zoning and grid concerns, underscoring that operational flexibility alone does not remove planning, financing and supply-chain barriers. Hardware and supply constraints — in packaging, test and specialized accelerator production — plus concentrated financing exposure mean that expected efficiency gains and flexible fleets may arrive slower or less uniformly than vendors claim.
Siting, system-operation trade-offs and policy signals
System operators have stressed a complementary point: very large, mostly inflexible loads complicate balancing and can raise costs for all consumers, prompting suggestions to site the largest campuses where curtailed renewable output can be absorbed and to use locational signals or conditional connection agreements. In practice, that could put new commercial pressure on operators to accept stricter interconnection terms, invest in on-site storage, or contract time‑aligned renewables to qualify for connection or favorable tariffs. The London trial shows what operators can offer in response, but realizing the full value requires regulatory acceptance of demand‑side performance, clear measurement standards, and contractual pathways to monetize curtailments.
Implications for markets and timelines
If a meaningful share of GPU capacity becomes reliably dispatchable, intraday price volatility and some peak capacity procurement needs would likely fall, pressuring peaker-plant economics and shifting investment toward orchestration and software-defined flexibility. However, this transition depends on aligning hardware supply, verified operational performance, and public‑sector rules: without that alignment the financial benefits may be uneven and developers could still face long permitting and underwriting timelines. The London results are a necessary proof point, but not by themselves a guarantee of rapid, systemwide transformation.