Overview
- ChatGPT Health is rolling out to early-access users as a separate, encrypted space where health chats are excluded from model training and connections to medical records or wellness apps are strictly opt-in.
- OpenAI says the tool is meant to help users interpret results and prepare for appointments rather than diagnose conditions, with guidance shaped by input from more than 260 physicians.
- A Northwestern AI-in-medicine expert warns that data shared in ChatGPT Health is not covered by HIPAA and could be subject to subpoenas or other legal processes.
- Researchers note that there is no mandatory safety testing for ChatGPT Health and no published evidence of its accuracy, and a senior AIIMS clinician cautions against self-diagnosis after a patient reportedly suffered bleeding following chatbot-informed self-medication.
- Competition is intensifying as Anthropic launches a HIPAA-oriented clinician product, while some experts advocate on-device AI as a privacy-preserving alternative that could become practical within the next one to two years.