Particle.news

CoreWeave Shares Rise on Launch of AI-Native Object Storage to Feed GPUs

The launch targets data bottlenecks that throttle GPU utilization in large-scale AI training.

Overview

  • CoreWeave introduced AI Object Storage, a fully managed service built to keep training and inference pipelines supplied with data.
  • The system uses its Local Object Transport Accelerator to make a single dataset instantly reachable across regions, clouds, and on-prem environments.
  • The company says the service eliminates egress, request, and tiering fees, offering three usage-based pricing tiers designed for simpler budgeting.
  • CoreWeave claims performance scales with workload size, sustaining throughput to distributed GPU nodes via private interconnects, cloud peering, and ports up to 400 Gbps.
  • CRWV stock traded up about 4.3% to $145.21 after the announcement, extending the company's software push following moves like ServerlessRL and deals including OpenPipe and Weights & Biases.