Overview
- Qualcomm announced the AI200 and AI250 accelerators alongside liquid‑cooled, rack‑scale systems that can be purchased as chips, server components, or complete configurations, including options with Qualcomm CPUs.
- The roadmap calls for AI200 availability in 2026 and AI250 in 2027, with a third part planned for 2028, as Qualcomm sets an annual release cadence.
- Qualcomm says the designs focus on inference, running already‑trained models rather than training new ones, and aim to lower total cost of ownership through energy efficiency and a new memory architecture.
- Built on Qualcomm’s Hexagon neural processing unit, scaled up from its smartphone and Windows PC platforms, each card is said to support up to 768 GB of memory, with the AI250 offering 10× the memory bandwidth of the AI200.
- Nvidia dominates the AI accelerator market, with reports putting its share above 90%, even as major buyers explore alternatives such as AMD; Qualcomm’s stock closed up 11.09% after the announcement.