Nvidia and OpenAI to Co-Build 10GW of AI Data Centers, With First Phase Targeted for Late 2026

The pact highlights how tight GPU supply now dictates rollout plans and purchasing strategies.

Overview

  • Both companies issued a joint statement outlining a strategic partnership to deploy at least 10 gigawatts of AI infrastructure using millions of Nvidia GPUs, with the initial phase slated to run on the Rubin platform in the second half of 2026.
  • Nvidia said it intends to invest up to $100 billion in OpenAI, released progressively as each gigawatt is deployed, with definitive terms still to be finalized following the signed letter of intent.
  • Nvidia CEO Jensen Huang told customers the OpenAI deal will not alter the company’s supply priorities, even as industry watchers note it could push rivals to accelerate in-house chip efforts or weigh AMD alternatives.
  • OpenAI CEO Sam Altman signaled that new compute-intensive products will arrive in the coming weeks, with some features initially limited to Pro subscribers or carrying extra fees because of high operating costs.
  • Separately, in China, Hangdong’s ‘智能山海’ model (literally “Intelligent Mountains and Seas”) cleared national service registration in July and is moving toward enterprise commercialization, touting patented ‘editable chain-of-thought’ agents designed to cut compute requirements and support domain-specific deployments.