Overview
- OpenAI will design the processors, while Broadcom will develop and manufacture them and supply networking gear for racks installed in OpenAI or cloud-partner facilities.
- The companies plan to start deploying server racks in the second half of 2026, with the rollout slated to finish by the end of 2029.
- The buildout targets the equivalent of 10 gigawatts of AI data center capacity; industry estimates put chip costs at roughly $35 billion per gigawatt.
- The agreement includes no equity or investment component and sits alongside OpenAI’s separate arrangements with Nvidia, which outlined up to $100 billion in support for at least 10 GW, and with AMD for 6 GW.
- Broadcom shares rose as much as 11% after the announcement; OpenAI said the custom chips, aimed at inference, would bake learnings from its models directly into the hardware to improve efficiency.