Overview
- OpenAI released two open-weight models: gpt-oss-120b, which runs on a single Nvidia GPU, and gpt-oss-20b, which runs on laptops with 16 GB of memory.
- Developers can download the model weights from Hugging Face or GitHub and deploy them through cloud platforms including AWS and Microsoft Azure, or through on-device tools like LM Studio and Ollama (see the sketch after this list).
- Early benchmarks show gpt-oss-120b and gpt-oss-20b beating open rivals such as DeepSeek’s R1 on coding and broad reasoning tasks, though both lag OpenAI’s proprietary o-series and hallucinate more often.
- OpenAI postponed the launch for extra safety reviews, which included filtering sensitive chemical, biological, radiological, and nuclear (CBRN) data and running adversarial fine-tuning, and concluded that neither model crossed the high-risk thresholds of its Preparedness Framework.
- The move marks a strategic shift prompted by U.S. government calls for open AI, intensifying pressure from Chinese rivals such as DeepSeek and Alibaba, and open-model efforts from Meta and Mistral AI.
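
For readers who want to try the smaller model locally, the sketch below loads it with the Hugging Face transformers library and generates a short reply. The repository id openai/gpt-oss-20b, the prompt, and the generation settings are assumptions for illustration, not details confirmed in the announcement.

```python
# Minimal local-inference sketch for the 20B model using Hugging Face transformers.
# Assumes the weights are published under the repo id "openai/gpt-oss-20b" and that
# the machine has enough memory (roughly 16 GB) to hold them.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "openai/gpt-oss-20b"  # assumed Hugging Face repository id

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype="auto",  # use the precision stored in the checkpoint
    device_map="auto",   # place weights on GPU/CPU automatically (requires accelerate)
)

# Build a chat-style prompt with the tokenizer's chat template.
messages = [
    {"role": "user", "content": "Summarize what an open-weight model release means."}
]
input_ids = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

output_ids = model.generate(input_ids, max_new_tokens=256)
# Decode only the newly generated tokens, skipping the prompt.
print(tokenizer.decode(output_ids[0][input_ids.shape[-1]:], skip_special_tokens=True))
```

An equivalent route is to pull the weights through Ollama or LM Studio instead; Ollama's ollama run command serves the same purpose without any Python code, assuming the model is published under a tag such as gpt-oss:20b.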