Overview
- Nvidia released Alpamayo‑R1, an open vision‑language‑action model for self‑driving research, on GitHub and Hugging Face for non‑commercial use.
- The model combines chain‑of‑thought reasoning with path planning to produce inspectable "think‑aloud" decision traces aimed at safer handling of complex road scenarios.
- Nvidia reports that reinforcement‑learning post‑training delivered significant gains in Alpamayo‑R1's reasoning compared with the pretrained baseline.
- Supporting resources include the AlpaSim evaluation framework, the Cosmos Cookbook, LidarGen for lidar simulation, Omniverse NuRec Fixer, Cosmos Policy, ProtoMotions3, and Physical AI Open Datasets.
- Ecosystem uptake spans Voxel51, 1X, Figure AI, Foretellix, Gatik, Oxa, PlusAI, and X‑Humanoid, while an independent Openness Index rated Nvidia’s Nemotron family among the most open model families.