Overview
- The agreement grants Anthropic access to as many as one million Google TPUs, bringing well over a gigawatt of AI compute online in 2026.
- Google Cloud highlighted TPUs’ price‑performance and efficiency, with CEO Thomas Kurian citing Anthropic’s years of experience training and serving models on the chips.
- Anthropic says it will continue to run workloads across Google TPUs, Amazon Trainium, and Nvidia GPUs, retaining control over model weights, pricing, and customer data.
- Amazon remains the primary training partner through Project Rainier, a multi‑data‑center supercomputer using hundreds of thousands of Trainium chips.
- Anthropic reports an annual revenue run rate approaching $7 billion and more than 300,000 business customers; industry estimates put the cost of a 1‑gigawatt buildout near $50 billion, with roughly $35 billion of that going to chips.