Overview
- AWS will supply hundreds of thousands of Nvidia GPUs, including GB200 and GB300, delivered via Amazon EC2 UltraServers configured in low‑latency clusters for training and inference.
- OpenAI is already using part of the capacity, with full deployment targeted before the end of 2026 and an option to expand the arrangement in later years.
- The agreement reinforces OpenAI’s multi‑vendor approach beyond longtime partner Microsoft, adding to deals with Oracle, Google Cloud, and CoreWeave.
- Investors welcomed the deal, with Amazon shares climbing roughly 4.9% on the news.
- Press reports place OpenAI’s broader infrastructure commitments at about $1.4 trillion as it scales development and operation of the advanced AI models behind products such as ChatGPT.