
Broadcom Forecasts >$100B AI Chip Revenue; Large Orders From Anthropic, OpenAI
Broadcom pushed more aggressively into commercial AI silicon, issuing a multi‑year demand outlook that projects AI‑accelerator sales surpassing $100 billion by 2027. Management tied the forecast to scheduled, large‑scale deliveries measured in gigawatts: a multi‑gigawatt commitment to Anthropic (commonly reported at roughly 3 GW) and a separate shipment of more than one gigawatt planned for OpenAI, both concentrated in the 2027 build window. The company also guided second‑quarter revenue to about $22 billion, above consensus, and authorized up to $10 billion of share repurchases to shore up its capital structure and signal confidence in cash generation.
The stock moved higher in pre‑market trading, reflecting a mix of optimistic demand read‑throughs and a renewed debate over whether headline design wins can be converted quickly into recurring volume. Upstream supplier results corroborate Broadcom's addressable‑market thesis: ASML recently reported sizable new bookings (around €13 billion) and strong sales, signaling that foundries and IDMs are committing to multi‑year equipment buys, while TSMC confirmed hyperscaler demand and flagged a stepped‑up capex profile (management cited a roughly 30% revenue‑uplift scenario for 2026 in contemporaneous reporting). Those supplier signals make the capacity‑expansion story more plausible, but they do not guarantee immediate wafer‑level output.
Analysts offer divergent takes. Jefferies published a notably bullish scenario linking heavier hyperscaler capex to a potential Broadcom share capture in an initial Google‑scale build: the bank modeled an initial window of about six million server‑accelerator units, suggested Broadcom could take roughly 85–90% of that tranche, and set a price target implying roughly 60–62% upside from recent levels. Other sell‑side and industry voices caution that NVIDIA's entrenched software stack, commercial relationships and broad GPU applicability will protect much of its market share outside tightly defined, high‑volume ASIC use cases.
Critical supply‑side constraints remain: substrate availability, packaging and test throughput, advanced‑node wafer allocation and lengthy qualification cycles are repeatedly cited by the supply chain as potential bottlenecks that can slow shipment conversion. Geopolitical and regulatory dynamics — selective clearances for high‑end NVIDIA parts into certain markets and evolving U.S.–Taiwan arrangements that affect tariffs and onshore investment — further complicate near‑term access to advanced nodes and customer footprints in specific regions.
Taken together, the data point to a hybrid outcome: Broadcom has secured anchor customers and upstream commitments that materially increase the probability of scaling, but realizing the full revenue pathway requires overcoming manufacturing, integration and policy frictions. For hyperscalers the practical procurement response is likely diversification — deploying ASICs where efficiency and unit economics dominate, and retaining GPUs where software breadth, tooling and vendor neutrality remain decisive. For investors, the announcement tightens the link between hyperscaler capex intent and semiconductor valuations, but execution timing will determine whether the upside is front‑loaded or stretched across several years.