Overview
- Journalists from 22 public‑service media organisations in 18 countries assessed roughly 3,000 anonymised answers from ChatGPT, Copilot, Gemini and Perplexity across 14 languages.
- Overall, 45% of responses contained at least one significant issue: 31% showed serious sourcing errors and 20% contained major inaccuracies or outdated facts.
- Google’s Gemini performed worst, with significant issues in about three‑quarters (76%) of its answers, driven largely by pervasive misattribution and missing or incorrect citations.
- Concrete failures included outdated claims, such as naming Pope Francis as the sitting pontiff weeks after his death in April 2025, and incorrect statements about disposable vape laws.
- The report debuts a News Integrity in AI Assistants Toolkit and the Facts In: Facts Out campaign. It also calls on regulators to enforce information‑integrity rules and to support independent, ongoing monitoring as more people turn to AI assistants for news (7% of online news consumers overall, and 15% of under‑25s).