
OpenAI’s ChatGPT Health Meets Privacy Warnings and Clinical Doubts

Experts caution that linking medical records to a consumer chatbot could shift protected data outside HIPAA oversight.

Overview

  • OpenAI announced a health-focused ChatGPT that lets users connect medical records and wellness apps and stores those conversations in a separate space; the company says the messages will not be used to train models, with rollout planned in the coming weeks.
  • Clinicians report misleading outputs when patients use the tool, including a misapplied statistic about a drug’s pulmonary embolism risk, and evaluators continue to flag hallucination concerns, with one assessment finding GPT-5 more error-prone than some rivals.
  • Security specialists warn that routing patient information from HIPAA-covered systems to non‑HIPAA vendors raises breach exposure and regulatory uncertainty.
  • Public criticism has intensified, with critics citing lawsuits that allege chatbot interactions contributed to deaths and urging users not to upload personal health records.
  • Healthcare remains a top consumer use for chatbots, yet many industry leaders instead promote provider-side tools: Anthropic touts administrative assistants for clinicians, and Stanford researchers are piloting an EHR-integrated assistant to reduce paperwork.