Overview
- OpenAI signed a seven-year, $38 billion agreement to run ChatGPT and other workloads on Amazon Web Services effective immediately.
- AWS will provide Amazon EC2 UltraServers with hundreds of thousands of Nvidia GPUs and the ability to scale to tens of millions of CPUs.
- The deployment clusters Nvidia GB200 and GB300 GPUs on a single low-latency network to maximize performance across interconnected systems.
- All contracted AWS capacity is scheduled to be deployed before the end of 2026, with an option to expand beginning in 2027.
- The pact is OpenAI’s first with AWS and reflects a multi-cloud strategy; coverage notes OpenAI’s continued reliance on Nvidia GPUs rather than AWS Trainium or Google TPUs as hyperscalers compete for AI workloads.