
G42 and Cerebras to deliver 8 exaflops of AI compute infrastructure in India
G42 has partnered with Cerebras to deploy an on‑shore installation delivering about 8 exaflops of AI processing capacity in India. The platform will be hosted and governed to meet Indian data‑residency and sovereignty requirements, with governance anchored domestically and academic participation from the Mohamed bin Zayed University of Artificial Intelligence (MBZUAI) and the Centre for Development of Advanced Computing (C‑DAC).
The system is being targeted at universities, government laboratories and small‑to‑medium enterprises, with prioritized access intended to let large‑model training and inference operate fully within India’s legal perimeter. Operational specifics — precise service tiers, pricing, scheduling and whether offerings will be purely managed or include on‑premises options — were not disclosed at the summit, leaving questions about how broad access will be and what booking or subsidization models will be used.
The announcement came amid a broader New Delhi summit push that set an ambitious national AI investment agenda and catalyzed a near‑term market reaction: listed firms tied to data‑center equipment, power, cooling and colocation recorded an estimated combined market‑capitalization lift of roughly $4 billion over the week. That market response reflects investor belief that political signaling, plus commercial commitments from anchor customers, can accelerate procurement and capacity plans.
Complementary capacity programs were highlighted at the event — including Yotta’s large Nvidia‑backed campus plans, an OpenAI–Tata initial 100 MW project with a roadmap toward 1 GW, and AMD–TCS Helios reference‑rack collaborations targeting up to 200 MW — creating a clustered demand signal for accelerators, servers and high‑density racks.
Market participants and analysts cautioned that converting summit momentum into durable capex will require binding memoranda, land allotments, procurement or tariff adjustments and concrete supply agreements — not just announcements. Local power‑system upgrades, grid interconnects and cooling‑infrastructure suppliers are among the immediate beneficiaries if projects reach construction; they also represent near‑term bottlenecks if interconnection and permitting lag.
Global accelerator and high‑bandwidth memory (HBM) shortages, packaging and test constraints, and multi‑year allocation cycles remain the primary execution risks. Those supply issues, combined with permitting and land timelines, make a phased, multi‑site rollout more likely and suggest meaningful portions of planned capacity will come online over 24–36 months.
Beyond throughput, the G42–Cerebras deployment is being positioned as a sovereignty instrument: keeping sensitive datasets and model weights inside India while leveraging foreign hardware and system integration. The cross‑jurisdictional arrangement — a UAE integrator and a U.S. silicon vendor operating under Indian governance — is increasingly the template for sovereign compute projects that balance capability with domestic control.
For researchers and smaller teams, on‑shore exascale capability shortens experiment cycles and reduces compliance friction for sensitive workloads. For markets and policymakers, the core challenge will be ensuring announced intent translates to signed contracts, supply commitments and usable capacity rather than transient sentiment that inflates valuations in the near term.