Intel Reconsiders Offering 18A Node to External Customers
Context and chronology
Intel's leadership signaled a change of course for its next-generation manufacturing node: the company now weighs selling 18A capacity to outside customers rather than keeping it exclusively for internal use. The market reacted quickly; shares of INTC:US surged roughly six percent during the session, reflecting investor appetite for new revenue pathways. Chief Financial Officer David Zinsner framed the shift as the product of improving process performance and said the company is reassessing earlier restrictions on external access. His comments followed months of internal debate over whether to position 18A as a proprietary advantage or a sellable foundry capability.
Technical progress remains uneven: only a limited share of wafers has met customer-grade specifications, but Intel reports month-on-month gains in good-die rates, narrowing the gap between pilot and production yields. That convergence matters because defect density directly drives per-unit cost and margin recovery; without reliable yields, external contracts would erode profitability. Customers including large systems houses and fabless designers have tested elements of the node, which would shorten onboarding time if Intel commits to commercial supply. Chief Executive Officer Lip-Bu Tan's earlier playbook emphasized 14A as the primary external offering while keeping 18A internal; Zinsner's comments signal a tactical pivot from that posture.
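The link between defect density and per-unit cost can be made concrete with the standard Poisson yield approximation, Y = exp(−D0·A). This is a textbook model, not Intel's internal methodology, and every number below (wafer cost, die count, defect densities) is an illustrative assumption rather than an actual 18A figure:

```python
import math

def die_yield(defect_density: float, die_area_cm2: float) -> float:
    """Poisson yield model: fraction of good dies given a defect
    density (defects per cm^2) and a die area (cm^2)."""
    return math.exp(-defect_density * die_area_cm2)

def cost_per_good_die(wafer_cost: float, dies_per_wafer: int,
                      defect_density: float, die_area_cm2: float) -> float:
    """Per-unit cost: the wafer cost amortized over good dies only,
    so higher defect density directly inflates unit cost."""
    good_dies = dies_per_wafer * die_yield(defect_density, die_area_cm2)
    return wafer_cost / good_dies

# Hypothetical numbers: a $20,000 wafer with 300 candidate 1 cm^2 dies,
# compared at two assumed defect densities.
for d0 in (0.5, 0.2):
    print(f"D0={d0}/cm^2: yield={die_yield(d0, 1.0):.1%}, "
          f"cost per good die=${cost_per_good_die(20000, 300, d0, 1.0):,.0f}")
```

Under these assumptions, cutting defect density from 0.5 to 0.2 defects/cm^2 lifts yield from roughly 61% to 82% and lowers unit cost by about a quarter, which is why month-on-month yield gains matter so much for margin recovery.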
Strategically, opening 18A to third parties would insert Intel into a crowded, high-stakes foundry market dominated by a few incumbents and steep economies of scale. If Intel converts idle capacity into contracted wafer volume it can monetize fixed-cost assets and smooth capital intensity, but the company must underwrite ramp risk until yields match customer tolerance. Incumbent foundries will face immediate pricing pressure for premium, leading-edge slots if Intel offers competitive lead times or bundled services. For large chip buyers, an additional supplier on advanced nodes increases negotiating leverage and could shift multi-year procurement commitments.
Second-order consequences extend beyond wafer sales: EDA vendors, IP licensors, and assembly-test partners stand to gain incremental demand for node-specific signoffs and design ports, while equipment suppliers could see reorder cycles compress. If Intel wins anchor customers, design houses will reallocate engineering resources to node optimization, changing the cadence of tapeouts across the ecosystem. Regulatory, IP, and geopolitical constraints will still shape which customers are eligible for service, limiting the addressable market despite technical readiness. Mr. Tan's personnel and capital decisions over the past year laid the groundwork for this option; the company can pivot faster because earlier restructuring reduced legacy overhead.
The risk profile is explicit: premature commercial supply at subpar yields would pressure margins and customer relationships, while delaying access cedes share to incumbents who benefit from scale. Technical integration hurdles — process maturity, design rule stability, and long-term reliability data — remain non-trivial and will determine how quickly Intel can convert interest into binding contracts. Near-term signals to monitor are customer letters of intent, multiyear capacity bookings, and month-over-month yield improvements; those will reveal whether this is a tactical negotiation lever or a permanent business-line expansion. For corporate strategy, the announcement converts a latent option into a visible strategic choice that will reconfigure competitive dynamics if executed cleanly.