CoreWeave Debuts Nvidia Blackwell Ultra AI Superchips in Cloud

The cloud specialist’s shares climbed almost 9% after it took delivery of Dell’s U.S.-assembled, liquid-cooled GB300 NVL72 racks to power exascale AI workloads.


Overview

  • CoreWeave is the first cloud provider to install Dell’s GB300 NVL72 racks, each featuring 72 Nvidia Blackwell Ultra GPUs and 36 Grace CPUs, at its Switch-hosted data center.
  • Each liquid-cooled GB300 NVL72 rack delivers 1.1 exaFLOPS of dense FP4 inference and 0.36 exaFLOPS of FP8 training performance, backed by 20 terabytes of HBM3E memory and 40 terabytes of system RAM, enabling up to 50 times more AI content generation than previous Blackwell chips (see the rough per-GPU breakdown after this list).
  • Dell assembled and tested the rack-scale systems in the United States; the racks use Nvidia’s Quantum-X800 InfiniBand switches and ConnectX-8 SuperNICs for high-speed connectivity.
  • CoreWeave’s shares jumped nearly 9% on the announcement, while Dell’s and Nvidia’s stocks rose about 1.4% and 1.3%, respectively.
  • The deployment expands CoreWeave’s capacity to support advanced large language model training, AI reasoning workloads and real-time inference for enterprise customers.
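
To put the rack-level figures above in per-GPU terms, the short Python sketch below divides the quoted rack totals by the 72 Blackwell Ultra GPUs in each GB300 NVL72 system. The rack-level inputs are the numbers reported in this article; the per-GPU results are back-of-envelope estimates for illustration, not official Nvidia or Dell specifications.

    # Back-of-envelope sketch: approximate per-GPU figures derived from the
    # rack-level GB300 NVL72 numbers quoted in this article. Illustrative
    # estimates only, not official specifications.

    GPUS_PER_RACK = 72     # Blackwell Ultra GPUs per GB300 NVL72 rack
    FP4_EXAFLOPS = 1.1     # dense FP4 inference, exaFLOPS per rack
    FP8_EXAFLOPS = 0.36    # FP8 training, exaFLOPS per rack
    HBM3E_TB = 20          # HBM3E memory per rack, terabytes

    # Convert rack totals to per-GPU values (decimal units).
    fp4_pflops_per_gpu = FP4_EXAFLOPS * 1000 / GPUS_PER_RACK   # ~15.3 PFLOPS
    fp8_pflops_per_gpu = FP8_EXAFLOPS * 1000 / GPUS_PER_RACK   # ~5.0 PFLOPS
    hbm_gb_per_gpu = HBM3E_TB * 1000 / GPUS_PER_RACK           # ~278 GB

    print(f"Dense FP4 inference per GPU: ~{fp4_pflops_per_gpu:.1f} PFLOPS")
    print(f"FP8 training per GPU:        ~{fp8_pflops_per_gpu:.1f} PFLOPS")
    print(f"HBM3E per GPU:               ~{hbm_gb_per_gpu:.0f} GB")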