Particle.news

AI Data-Center Chip Race Sharpened by AMD’s Momentum and Nvidia’s Valuation Dip

Massive hyperscaler spending plans keep demand intact, shifting investor focus to execution of reported supply arrangements and near-term product ramps.

Overview

  • Nvidia maintains an estimated ~90% share in AI training GPUs, anchored by its CUDA software ecosystem, though its stock’s forward valuation recently fell after a report that OpenAI’s planned investment had stalled, a claim Nvidia’s CEO disputed.
  • AMD reported robust Q4 results with revenue up 34% to $10.3 billion, data-center sales up 39% to $5.4 billion, and stronger profitability and cash flow driven by EPYC server CPUs and a rapid ramp of Instinct MI350 accelerators.
  • Analysts highlight sustained AI infrastructure demand as hyperscalers plan extraordinarily large capex, with recent reports citing commitments reaching into the hundreds of billions of dollars for 2026 and as much as $700 billion this year across five companies.
  • Reports describe a multi-year AMD-OpenAI arrangement for roughly 6 gigawatts of GPUs potentially worth about $200 billion, including warrants that could give OpenAI roughly a 10% equity stake in AMD tied to deliveries, though key terms remain unconfirmed.
  • Broadcom’s custom AI chip business continues to scale through ASIC and TPU work with major customers, with reporting that Anthropic placed a $21 billion TPU order to be fulfilled through Broadcom.