Overview
- Orders begin October 15 on NVIDIA.com with global OEM variants from Acer, ASUS, Dell, Gigabyte, HP, Lenovo and MSI, plus U.S. retail through Micro Center.
- Built on the GB10 Grace Blackwell Superchip, the system delivers up to 1 petaflop of AI performance at FP4 precision, with 128GB of unified CPU‑GPU memory and up to 4TB of NVMe storage.
- DGX OS ships with CUDA libraries, TensorRT and NIM microservices preinstalled, enabling out‑of‑the‑box inference, fine‑tuning and agent workflows (see the inference sketch after this list).
- Nvidia says the workstation can run inference on models of up to about 200 billion parameters and fine‑tune models of up to roughly 70 billion; NVLink‑C2C provides the coherent CPU‑GPU memory link, while ConnectX‑7 networking allows units to be clustered for larger models (a rough memory back‑of‑envelope follows the list).
- As part of the launch, CEO Jensen Huang hand‑delivered an early unit to Elon Musk at SpaceX’s Starbase; early systems are rolling out to major companies and labs, though some reporting questions how many units will be available at first.
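Because NIM microservices expose an OpenAI‑compatible HTTP API, a locally served model can be queried with a few lines of Python. The sketch below is illustrative only: the port, model identifier and API key are assumptions about a typical local NIM deployment, not confirmed DGX Spark defaults.

```python
# Minimal sketch: querying a NIM microservice running locally.
# Assumptions (not confirmed defaults): the container listens on port 8000,
# exposes an OpenAI-compatible endpoint, and serves the model named below.
from openai import OpenAI

client = OpenAI(
    base_url="http://localhost:8000/v1",  # assumed local NIM endpoint
    api_key="not-used",                   # local deployments typically ignore the key
)

response = client.chat.completions.create(
    model="meta/llama-3.1-8b-instruct",   # hypothetical model identifier
    messages=[{"role": "user",
               "content": "Summarize what a unified-memory AI workstation is good for."}],
    max_tokens=128,
)
print(response.choices[0].message.content)
```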
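A rough back‑of‑envelope makes the 200‑billion‑parameter inference figure plausible. The numbers below assume 4‑bit quantized weights, which is an assumption about how the figure is reached rather than a stated specification.

```python
# Back-of-envelope, not a benchmark: weight memory for a ~200B-parameter model
# at 4-bit precision (0.5 bytes per parameter) versus 128GB of unified memory.
params = 200e9           # ~200 billion parameters (assumed model size)
bytes_per_param = 0.5    # 4-bit quantization (assumption)
weights_gb = params * bytes_per_param / 1e9
print(f"~{weights_gb:.0f} GB of weights")  # ~100 GB, leaving headroom for KV cache and activations
```

Fine‑tuning at the 70‑billion‑parameter scale within the same memory budget would likewise rely on quantization and parameter‑efficient methods rather than full‑precision training.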