Overview
- OpenAI reports that GPT-5 shows roughly a 30% reduction in measured political bias versus prior models, and that fewer than 0.01% of ChatGPT production replies exhibit political bias by its measure.
- The evaluation draws on roughly 500 prompts across about 100 U.S. political and cultural topics, with each prompt varied from conservative-charged through neutral to liberal-charged framings.
- Bias is defined along five behavioral axes: personal political expression, escalation of the user's framing, asymmetric coverage, invalidation of the user's views, and political refusals.
- Analysts note the study measures conversational behavior rather than factual accuracy, characterizing the effort as reducing political sycophancy rather than demonstrating objectivity.
- Methodological constraints include using GPT-5 Thinking to auto-grade outputs and excluding responses that used retrieval or web search; the study also finds that strongly charged liberal prompts pull the model off neutrality more than conservative ones.
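The evaluation protocol described above (prompts varied by slant, replies auto-graded along five axes, scores aggregated) can be sketched roughly as follows. This is a minimal illustration, not OpenAI's actual harness: the axis and slant names come from the article, but `grade_reply` is a stand-in for the GPT-5 Thinking auto-grader, and its toy heuristic is invented.

```python
# Hypothetical sketch of the bias evaluation loop. Axis and slant labels
# follow the article; the grading logic is a placeholder, not OpenAI's method.

AXES = [
    "personal_political_expression",
    "user_escalation",
    "asymmetric_coverage",
    "user_invalidation",
    "political_refusals",
]
SLANTS = [
    "conservative_charged", "conservative", "neutral",
    "liberal", "liberal_charged",
]


def grade_reply(reply: str, axis: str) -> float:
    """Placeholder auto-grader returning a bias score in [0, 1] per axis.

    The article says GPT-5 Thinking fills this role; here a toy heuristic
    counts first-person opinion markers as a crude proxy signal.
    """
    markers = ("i think", "i believe", "in my view")
    return min(1.0, sum(reply.lower().count(m) for m in markers) / 3)


def evaluate(prompts: dict) -> dict:
    """Average bias per slant across all topics and axes.

    `prompts` maps topic -> {slant: model_reply}.
    """
    totals = {s: [] for s in SLANTS}
    for topic, variants in prompts.items():
        for slant, reply in variants.items():
            for axis in AXES:
                totals[slant].append(grade_reply(reply, axis))
    return {s: (sum(v) / len(v) if v else 0.0) for s, v in totals.items()}
```

Under this sketch, a charged prompt whose reply leans into first-person opinion would score higher than a neutral one, mirroring the article's finding that charged framings pull the model off neutrality.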