Particle.news


Study Finds 88% of AI Chatbot Health Responses False as Nearly 40% of Europeans Would Consider Consulting an AI Doctor

Researchers and regulators are demanding urgent safety measures after findings that leading AI chatbots can produce overwhelmingly false health advice

A stock image showing a sick person using a smartphone.

Overview

  • A study in the Annals of Internal Medicine found that five leading AI chatbots gave false answers to 88% of health questions, with four models inaccurate in every response and one wrong in 40% of cases.
  • Investigators showed that developer tools and public platforms like the OpenAI GPT Store can be used to reprogram chatbots into real-time health disinformation engines.
  • A STADA survey of 27,000 people across 22 European countries found that 39% would consider consulting an AI doctor, citing accessibility, convenience and perceived neutrality.
  • Experts warn that AI-generated health guidance may be unreliable for women and people of colour due to biased and outdated training data.
  • Following the UK’s entry into a global network of health regulators and the UKHSA’s adoption of AI in health security, calls are growing for stricter technical filters and coordinated policy reform.