
Nvidia signs multiyear deal to supply Meta with Blackwell, Rubin GPUs and Grace/Vera CPUs
Nvidia has agreed to a multiyear supply arrangement to deliver millions of current and forthcoming AI accelerators and data-center processors to Meta. The contract explicitly covers Nvidia's Blackwell GPUs and the upcoming Rubin chips, and it includes standalone shipments of the Arm-based Grace line and the next-generation Vera CPUs for server workloads. Industry analysts estimate that cumulative demand tied to the agreement could approach $50 billion, making it one of the larger single-vendor demand signals in the AI hardware market. By bundling accelerator and CPU supply in a single relationship, the pact sharpens Nvidia's integrated-hardware advantage and converts part of its roadmap into predictable shipment volumes. Internal and partner benchmarks cited by analysts indicate that Grace can materially reduce power use, roughly halving it in some database workloads, which changes infrastructure and total-cost-of-ownership calculations for hyperscalers. The deal is expected to accelerate Meta's deployment of co-designed GPU-CPU systems across its infrastructure and dovetails with the company's broader capital plans to expand purpose-built data-center capacity.
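To illustrate how a power reduction of that magnitude might feed into total-cost-of-ownership math, the sketch below works through a back-of-the-envelope energy-cost comparison. The server wattages, PUE and electricity price in it are hypothetical assumptions chosen for illustration, not figures from the agreement or from Nvidia's benchmarks.

```python
# Hypothetical back-of-the-envelope sketch: how halving per-server power draw
# changes annual electricity cost. All figures are illustrative assumptions,
# not numbers from the Nvidia/Meta agreement.

HOURS_PER_YEAR = 24 * 365

def annual_energy_cost(server_watts: float,
                       pue: float = 1.3,           # assumed data-center power usage effectiveness
                       usd_per_kwh: float = 0.08   # assumed industrial electricity price
                       ) -> float:
    """Annual electricity cost in USD for one server at the given draw."""
    kwh_per_year = server_watts / 1000 * HOURS_PER_YEAR * pue
    return kwh_per_year * usd_per_kwh

baseline = annual_energy_cost(server_watts=600)  # assumed baseline server draw
halved   = annual_energy_cost(server_watts=300)  # assumed draw if workload power is halved

print(f"baseline: ${baseline:,.0f}/yr  halved: ${halved:,.0f}/yr  "
      f"savings: ${baseline - halved:,.0f}/yr per server")
```

At hyperscale, where fleets run to hundreds of thousands of servers, even per-server savings on this order compound quickly, which is why analysts frame the Grace efficiency claim in TCO terms.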
Industry observers caution that this demand signal sits inside a more complex compute landscape in which purpose-built ASICs and cloud providers' custom accelerators are also making commercial inroads. Supply-chain and capacity constraints, from packaging and substrate bottlenecks to wafer allocation and advanced-node foundry lead times, mean that translating large design wins into sustained, high-volume shipments can take years. Nvidia has been extending its commercial ties downstream, and recent moves by the company and others to anchor capacity with cloud providers and partners could smooth delivery but also concentrate commercial influence. Geopolitical and regulatory limits on some exports, along with the uneven global availability of the most advanced parts, add further near-term constraints. For competitors such as AMD and Intel, the agreement raises the bar on performance, energy efficiency and integrated solutions; for buyers, the pragmatic response remains diversification across GPUs, ASICs and alternative stacks, depending on workload economics. Overall, the multiyear pact signals stronger, more predictable demand for Nvidia's full-stack offerings while underscoring the execution and capacity risks that will determine how quickly that demand converts into deployed compute.




