
OpenAI teams with Tata to build large-scale AI data centres in India
A new alliance between OpenAI and the Tata Group will fund and build sizeable compute facilities in India aimed at hosting high‑scale AI workloads. The initial build is planned at 100 MW, with a roadmap that could scale to 1 GW over time.
Tata Consultancy Services will lead the technical delivery and integration work, positioning its services to embed advanced models into enterprise customers’ operations. TCS’s emerging role in high‑density rack integration — visible in contemporary collaborations with hardware vendors that package validated rack designs and on‑site systems integration — is likely to accelerate installation cadence and reduce deployment risk for the first phases.
Rather than a single monolithic campus, the broader 1 GW target appears commercially realistic as a phased, multi‑site program that can be sited across Tata’s existing campuses and partner locations. That delivery model mirrors other recent multi‑site approaches in India that aggregate capacity across several campuses to manage grid, permitting and land constraints while enabling incremental service roll‑outs to enterprise and third‑party clients.
Financial scale is substantial: at the 1 GW target, industry cost benchmarks imply an outlay in the high single‑digit to low double‑digit billions of dollars. Practical execution will hinge on constrained accelerator supply, high‑bandwidth memory availability, and architecture choices for racks and interconnects — factors that determine how much of the build is optimised for training versus inference workloads.
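As a rough illustration of that cost range, the arithmetic can be sketched as a per‑megawatt estimate. The per‑MW figures below are assumptions chosen to reproduce the "high single‑digit to low double‑digit billions" bracket, not numbers from the announcement:

```python
# Back-of-envelope capital cost for a phased data-centre build.
# Per-MW cost figures are illustrative assumptions, not announced values.

def build_cost_range(capacity_mw, cost_per_mw_low=8e6, cost_per_mw_high=15e6):
    """Return (low, high) estimated capital cost in dollars."""
    return capacity_mw * cost_per_mw_low, capacity_mw * cost_per_mw_high

# First phase: 100 MW
low, high = build_cost_range(100)
print(f"100 MW phase: ${low / 1e9:.1f}B - ${high / 1e9:.1f}B")

# Full roadmap: 1 GW = 1000 MW
low, high = build_cost_range(1000)
print(f"1 GW target:  ${low / 1e9:.1f}B - ${high / 1e9:.1f}B")
```

At these assumed rates, the 100 MW first phase lands near $1B while the full 1 GW roadmap spans roughly $8B to $15B, consistent with the industry range cited above.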
Energy provisioning, phased capacity expansion and integration with Tata’s software and systems services are core implementation themes. The partnership will need to coordinate OpenAI’s model and platform requirements with site‑level realities such as substations, transmission upgrades, cooling design and municipal permitting — issues that have extended timetables for similar projects in the region.
- Planned first phase: 100 MW build to host model training and inference.
- Expansion possibility: up to 1 GW, implying a multi‑billion dollar investment executed across phases and likely across multiple sites.
- TCS role: deliver technical construction, rack integration, and enterprise roll‑outs leveraging systems‑integration experience.
The announcement also lands against a broader national push to attract heavy AI‑linked investment and follows other industry moves — including vendor‑led rack reference designs and operator commitments to GPU stacks — that collectively shape procurement, supply and timing. For customers, onshore compute promises lower latency and easier compliance; for suppliers, it creates obvious demand signals for chips, racks, power and cooling capacity.
However, the path from headline capacity to operational clusters faces familiar bottlenecks: multi‑year GPU allocation cycles, permitting and grid upgrades, and the need for anchored commercial commitments to derisk large builds. Those constraints suggest meaningful portions of planned capacity could take 24–36 months to come online depending on site selection and component availability.