Particle.news

OpenAI Says GPT-5 Models Cut Measured Political Bias by About 30% in Internal Tests

The company published a five‑axis evaluation framework, inviting outside researchers to validate the results.

Overview

  • OpenAI stress-tested GPT-5 Instant and GPT-5 Thinking on roughly 500 prompts across 100 political and cultural topics, each written from five ideological framings.
  • Responses were scored by a separate AI grader from 0 (neutral) to 1 (highly biased) using criteria covering user invalidation, escalation, personal political expression, asymmetric coverage, and unwarranted refusals.
  • Analysis of real-world usage found fewer than 0.01% of ChatGPT replies showing signs of political bias, which OpenAI characterized as rare and low severity.
  • The report notes that moderate bias can still surface on emotionally charged prompts, with the strongest pull toward bias observed on charged liberal framings.
  • The work, led by Joanne Jang’s Model Behavior division and described publicly by researcher Natalie Staudacher, follows OpenAI’s developer conference and arrives as AI neutrality faces heightened policy scrutiny.