Overview
- Anthropic says Opus 4.5 scored 80.9% on SWE-bench Verified, a benchmark of real-world coding tasks, edging out OpenAI’s GPT-5.1-Codex-Max and Google’s Gemini 3 Pro.
- The company reports the model outperformed every human candidate on its timed, two-hour performance engineering exam, a test of hands-on technical skills.
- Pricing is set at $5 per million input tokens and $25 per million output tokens, a sharp reduction versus the prior Opus model’s $15/$75 rates.
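To make the pricing change concrete, here is a small sketch comparing request costs at the rates quoted above ($5/$25 for Opus 4.5 versus $15/$75 for the prior Opus). The workload sizes are illustrative assumptions, not figures from the announcement.

```python
def request_cost(input_tokens: int, output_tokens: int,
                 in_rate: float, out_rate: float) -> float:
    """Cost in dollars for one request, given per-million-token rates."""
    return (input_tokens * in_rate + output_tokens * out_rate) / 1_000_000

# Hypothetical workload: 2M input tokens, 500k output tokens.
new_cost = request_cost(2_000_000, 500_000, 5, 25)    # Opus 4.5 rates
old_cost = request_cost(2_000_000, 500_000, 15, 75)   # prior Opus rates

print(new_cost)  # 22.5
print(old_cost)  # 67.5
```

At these rates the same workload costs one third as much, since both the input and output prices dropped by the same factor of three.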
- A new effort parameter lets developers trade reasoning depth against speed and cost, with Anthropic citing Sonnet-level results at the medium setting while using about 76% fewer tokens.
- Opus 4.5 is available in the Anthropic apps and API, and on AWS Bedrock, Google Vertex AI, and Microsoft Azure. The launch also brings updates to Claude Code on desktop, Claude for Excel, and a broader rollout of the Chrome extension; independent validation of many of these claims is still pending.