Overview
- The open-source Qwen3-235B-A22B-Instruct-2507-FP8 was released on ModelScope and Hugging Face to broaden developer access to its capabilities.
- It scored 70.3 on the 2025 American Invitational Mathematics Examination (AIME25) benchmark, outperforming DeepSeek-V3-0324 and OpenAI’s GPT-4o-0327.
- On the MultiPL-E coding benchmark, the upgraded model scored 87.9, surpassing the same DeepSeek and OpenAI models but remaining just below Anthropic’s Claude Opus 4 in non-thinking mode.
- The update expands the context window eightfold to 256,000 tokens and operates solely in non-thinking mode, returning direct answers without explicit step-by-step reasoning (see the usage sketch after this list).
- A 3-billion-parameter variant from the Qwen series will be embedded in HP’s Xiaowei Hui assistant to enhance document drafting and meeting summarization on PCs.
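A minimal sketch of how a developer might query the released checkpoint with Hugging Face transformers. The model ID matches the public Hugging Face release; the prompt, generation settings, and assumption of suitable multi-GPU hardware are illustrative, not prescribed by the announcement.

```python
# Minimal sketch, assuming Hugging Face transformers and hardware able to host
# the FP8 checkpoint; prompt and sampling settings are illustrative assumptions.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "Qwen/Qwen3-235B-A22B-Instruct-2507-FP8"

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype="auto",   # keep the weights as shipped in the FP8 release
    device_map="auto",    # shard the 235B-parameter model across available GPUs
)

# Non-thinking mode: the chat template returns a direct answer with no
# explicit reasoning trace, so no thinking-related flag is required.
messages = [{"role": "user", "content": "Summarize this meeting transcript in three bullet points."}]
inputs = tokenizer.apply_chat_template(
    messages,
    add_generation_prompt=True,
    return_tensors="pt",
).to(model.device)

outputs = model.generate(inputs, max_new_tokens=512)
print(tokenizer.decode(outputs[0][inputs.shape[-1]:], skip_special_tokens=True))
```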