Amazon Shifts AI Chip Strategy, Focuses on Trainium to Compete with Nvidia
AWS discontinues Inferentia chip development, expands Trainium capabilities, and announces next-gen Trainium3 for 2025.
- Amazon Web Services (AWS) has halted development of its Inferentia inference chip to concentrate on its Trainium line for both training and inference workloads.
- AWS unveiled the Trainium2 processor, which powers new EC2 Trn2 instances and UltraServers, offering significant performance improvements for large AI workloads.
- The company announced plans for Trainium3, expected in 2025, built on TSMC's 3nm process and projected to deliver four times the performance of Trainium2 along with a 40% improvement in energy efficiency.
- AWS is constructing an exaFLOPS-class supercomputer from hundreds of thousands of Trainium2 processors, aimed at training advanced AI models and challenging Nvidia's dominance of the AI chip market.
- Customers including Apple and the AI startup Anthropic are already adopting Trainium2, and AWS plans to introduce a new AI model, codenamed 'Olympus,' which could leverage Trainium3's capabilities.