Overview
- The gpt-oss-120b and gpt-oss-20b models are now publicly available on Hugging Face under an Apache 2.0 license.
- gpt-oss-120b achieves near-parity with the proprietary o4-mini on core reasoning benchmarks while running efficiently on a single 80 GB GPU.
- gpt-oss-20b delivers o3-mini–level performance and supports on-device inference with just 16 GB of memory (see the loading sketch after this list).
- OpenAI applied adversarial fine-tuning on worst-case biology and cybersecurity scenarios and launched a $500,000 red teaming challenge to further evaluate risks.
- Microsoft integrated the models into Azure AI Foundry and Windows AI Foundry for scalable cloud and local deployments.
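For readers who want to try the weights locally, the sketch below loads the smaller model through the Hugging Face transformers text-generation pipeline. The model identifier `openai/gpt-oss-20b` and the chat-message calling convention are assumptions based on the Hugging Face release; consult the model card for the recommended setup and hardware requirements.

```python
# Minimal sketch: run gpt-oss-20b locally via the transformers pipeline.
# Assumes the weights are published as "openai/gpt-oss-20b" and that the
# machine has roughly 16 GB of free GPU/unified memory, per the release notes.
from transformers import pipeline

generator = pipeline(
    "text-generation",
    model="openai/gpt-oss-20b",   # swap in "openai/gpt-oss-120b" on an 80 GB GPU
    torch_dtype="auto",           # let transformers choose an appropriate precision
    device_map="auto",            # place layers on available GPU/CPU memory automatically
)

messages = [
    {"role": "user", "content": "Summarize the trade-offs of open-weight models."},
]

outputs = generator(messages, max_new_tokens=256)
# With chat-style input, generated_text holds the full conversation;
# the last entry is the model's reply.
print(outputs[0]["generated_text"][-1])
```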