Particle.news

Google Brings Ironwood TPU to General Availability in Full-Stack AI Push With New Axion Instances

The launch positions Google to court massive AI training and inference deals through a vertically integrated stack that pairs Axion CPUs with ultra‑large TPU pods.

Overview

  • Ironwood, Google’s seventh‑generation TPU, will be available to customers in the coming weeks and is built for large‑scale model training, reinforcement learning, and low‑latency inference.
  • Ironwood pods scale to 9,216 chips with 9.6 Tbps inter‑chip bandwidth and access to shared high‑bandwidth memory, using optical circuit switching and torus fabrics for reliability and flexible sizing.
  • Google cites performance gains of roughly 10× over TPU v5p and more than 4× per chip over TPU v6e (Trillium), positioning Ironwood for both training throughput and high‑volume serving.
  • Alongside the TPU rollout, Google introduced Axion Arm‑based instances, with C4A generally available and N4A and C4A Metal in preview to support general workloads and specialized deployments.
  • Anthropic plans to use up to one million TPUs to serve Claude as Google challenges Nvidia’s dominance in AI infrastructure and lifts its 2025 capex guidance to about $93 billion to meet demand.