Overview
- Google said its Gemini 3 Pro model was trained entirely on TPUs, and reports indicate the company is weighing broader access to its chips, including potential sales to customers outside its own cloud.
- Anthropic confirmed a multibillion-dollar agreement for access to up to one million TPUs to develop Claude models within Google data centers.
- Meta is reportedly exploring deals to spend billions on TPUs starting in 2027, with the possibility of running some workloads on Google-hosted TPUs sooner.
- Amazon Web Services introduced Trainium3, projecting roughly four times the performance of Trainium2 and 40% better efficiency, reinforcing multi-chip strategies across the industry.
- Nvidia remains the default AI compute platform, with about 90% market share and an integrated, CUDA-centric stack; it is coming off a record quarter of roughly $57 billion, and analysts see demand outstripping supply in the near term.