Overview
- Ironwood, Google’s seventh‑generation TPU, will be available to customers in the coming weeks for both training and low‑latency inference.
- Google claims up to 10× the performance of TPU v5p and more than 4× that of Trillium (TPU v6), aimed at faster model training and serving.
- Ironwood superpods can link up to 9,216 chips over a 9.6 Tb/s inter‑chip interconnect with access to 1.77 PB of shared HBM; optical circuit switching boosts reliability as part of Google's AI Hypercomputer design (a quick per‑chip arithmetic check follows this list).
- Anthropic plans to use up to one million Ironwood TPUs to train and serve Claude, citing efficiency and scalability gains.
- Alongside the TPU rollout, Google introduced Armv9‑based Axion instances (C4A generally available, N4A in preview, C4A Metal in preview) and raised 2025 capex guidance to about $93 billion as AI demand drives Google Cloud growth.
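As a rough sanity check on the superpod figures quoted above, the sketch below divides the shared HBM pool by the chip count; the ~192 GB per‑chip result is derived from the quoted totals, not a number stated in this summary.

```python
# Back-of-the-envelope check of the Ironwood superpod figures (assumed from
# the announcement numbers quoted above, using decimal units: 1 PB = 1e6 GB).
CHIPS_PER_SUPERPOD = 9_216
SHARED_HBM_PB = 1.77

hbm_per_chip_gb = (SHARED_HBM_PB * 1_000_000) / CHIPS_PER_SUPERPOD
print(f"~{hbm_per_chip_gb:.0f} GB of HBM per chip")  # prints ~192 GB
```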