OpenAI Releases GPT-5.2-Codex With Context Compression, Rolling Out to Paid ChatGPT Users

Independent checks remain limited, leaving the model’s real‑world advantage unproven.

Overview

  • OpenAI’s new agent-focused coding model is live for paid ChatGPT users and integrated into Codex CLI, IDE extensions, cloud tooling and code review flows, with API access slated to open in the coming weeks.
  • GPT-5.2-Codex introduces native context compression to cut token and reasoning costs on long-range code tasks, improves native reliability on Windows 10 and 11, and adds stronger understanding of screenshots, technical diagrams and UI designs.
  • OpenAI reports higher scores on coding benchmarks, including 56.4% on SWE-Bench Pro and 64.0% on Terminal-Bench 2.0, though third-party verification remains limited.
  • To manage dual-use risks, OpenAI says the model has not reached its internal high-risk readiness level and is launching a Trusted Access Pilot for vetted security experts, citing a Privy case where a prior Codex model helped probe React Server Components vulnerabilities.
  • The same-day release of Google’s Gemini 3 Flash prompted early user comparisons: one test shared on social media found Gemini faster and said it flagged more issues in a 50-file vulnerability review, while separate reporting questioned broader GPT-5.2 ‘human expert’ claims after basic generation errors.