ChatGPT Updates Health Guidance After Toxic Salt Advice Hospitalises Patient

OpenAI says it has revised its health advice to warn against toxic substitutes, removing sodium bromide recommendations following a bromism report.


Overview

  • A peer-reviewed case report in Annals of Internal Medicine confirms that a 60-year-old man developed bromism after substituting sodium bromide for table salt based on ChatGPT guidance.
  • Clinicians replicated the user’s query and found that the earlier model recommended sodium bromide without any health warnings; the patient was hospitalised for three weeks with paranoia, hallucinations and related symptoms.
  • When Metro re-queried the updated chatbot, the response included an explicit safety note against toxic alternatives and omitted bromide from its list of salt substitutes.
  • OpenAI has pledged further safeguards for medical prompts and strengthened disclaimers, but the absence of the original chat log highlights gaps in auditing past AI outputs.
  • Health experts warn that decontextualised AI-generated medical advice can prompt dangerous self-experimentation, underscoring calls for clinician oversight and clearer AI safety protocols.