Particle.news

OpenAI Releases GPT-5.1-Codex-Max for Long-Running, Agentic Coding

OpenAI highlights multi-window compaction for sustained tasks with faster, more token‑efficient execution.

Overview

  • The new coding-focused model operates across multiple context windows using compaction, enabling work over millions of tokens and sustained runs for hours.
  • OpenAI reports 77.9% on SWE-Bench Verified and 58.1% on TerminalBench at high settings, reflecting gains over GPT-5.1-Codex in its internal tests.
  • In OpenAI’s examples, Codex-Max completes real-world engineering tasks using about 30% fewer thinking tokens and runs 27%–42% faster with fewer tool calls and lines of code.
  • The model is available now in Codex via the CLI, IDE extension, cloud, and code review. Rollout to ChatGPT paid plans comes next, with API access to follow; it replaces GPT-5.1-Codex as the recommended model for agentic coding.
  • Safety measures include sandboxed execution with network access disabled by default, plus monitoring for misuse. It is also the first Codex model trained to operate effectively in Windows environments, with improved PowerShell use.
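The compaction idea described above — folding older context into a summary so an agent can keep working past a single context window — can be sketched in a few lines. This is a hypothetical illustration, not OpenAI's actual implementation: the window sizes, the word-based token count, and the `compact` summarizer are all stand-ins.

```python
# Minimal sketch of context-window compaction (hypothetical; not OpenAI's
# actual mechanism). When accumulated history nears the window limit,
# earlier entries are collapsed into a summary so the run can continue.

WINDOW_LIMIT = 100      # max tokens per context window (illustrative)
COMPACT_THRESHOLD = 80  # compact once usage crosses this point

def tokens(entry: str) -> int:
    # Crude stand-in for a tokenizer: one token per whitespace-split word.
    return len(entry.split())

def compact(history: list[str]) -> list[str]:
    # Stand-in for a model-generated summary: keep the most recent entry
    # and fold everything earlier into one short summary line.
    summary = f"[summary of {len(history) - 1} earlier steps]"
    return [summary, history[-1]]

def run_agent(steps: list[str]) -> list[str]:
    history: list[str] = []
    for step in steps:
        history.append(step)
        if sum(tokens(e) for e in history) > COMPACT_THRESHOLD:
            history = compact(history)  # free space, keep working
    return history

# A long run keeps fitting inside the window because old steps are
# repeatedly summarized rather than truncated outright.
final = run_agent([f"step {i}: " + "word " * 10 for i in range(40)])
assert sum(tokens(e) for e in final) <= WINDOW_LIMIT
```

The key design point the article attributes to Codex-Max is that compaction preserves a distillation of earlier work instead of discarding it, which is what allows sessions spanning millions of tokens and multi-hour runs.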