Overview
- OpenAI gains immediate access to AWS infrastructure, with all capacity targeted for deployment before the end of 2026 and the option to expand further from 2027 onward.
- AWS will provision hundreds of thousands of Nvidia GB200 and GB300 GPUs on Amazon EC2 UltraServer clusters in U.S. data centers.
- The capacity will support training of next‑generation models and serving ChatGPT to hundreds of millions of weekly users.
- OpenAI will rely on Nvidia accelerators rather than AWS’s Trainium chips under this pact.
- The deal follows OpenAI's disclosure of roughly $1.4 trillion in broader infrastructure commitments requiring about 30 GW of power, and coincided with share‑price gains for Amazon and Nvidia.