Overview
- CoreWeave introduced an AI-native object storage service whose Local Object Transport Accelerator (LOTA) makes datasets instantly reachable across regions, clouds, and on‑prem environments, with no egress, request, or tiering fees.
- The storage platform targets cost reductions of more than 75% for typical AI workloads and sustains high throughput to distributed GPU nodes over private interconnects, cloud peering, and network ports of up to 400 Gbps.
- The company publicly launched Serverless RL, a fully managed reinforcement learning service built with Weights & Biases and OpenPipe that scales across dozens of GPUs and requires only a W&B account to start.
- CoreWeave says Serverless RL delivers about 1.4× faster training at roughly 40% lower cost versus local H100 setups by multiplexing runs and charging only for incremental tokens generated.
- Shares rose on heavy trading volume after the announcements and a partnership with Nvidia‑backed Poolside to build a large AI data center, even as regulatory filings show insiders recently selling more than 28.6 million shares, including 1.11 million shares by director Jack D. Cogen.