
OpenAI Rolls Out ChatGPT Health With Opt-In Record Linking as Experts Flag Privacy and Safety Risks

In the U.S., the feature sits outside HIPAA, leaving users to rely on the company's own privacy policies.

Overview

  • ChatGPT Health is rolling out to early-access users as a separate, encrypted space where health chats are excluded from model training and connections to medical records or wellness apps are strictly opt-in.
  • OpenAI says the tool is meant to help interpret results and prepare for appointments rather than diagnose conditions, with guidance shaped by input from more than 260 physicians.
  • A Northwestern AI-in-medicine expert warns data shared in ChatGPT Health is not covered by HIPAA and could be subject to subpoenas or other legal processes.
  • Researchers note there is no mandatory safety testing or published evidence on ChatGPT Health’s accuracy, and a senior AIIMS clinician cautions against self-diagnosis after a patient reportedly suffered bleeding following chatbot-informed self-medication.
  • Competition is intensifying as Anthropic launches a HIPAA-oriented clinician product, while some experts advocate on-device AI as a privacy-preserving alternative that could become practical within the next one to two years.