Overview
- A Washington Post reporter gave ChatGPT Health a decade of Apple Watch data—29 million steps and 6 million heartbeats—and saw cardiovascular grades swing from an F to a C and even a B on repeat requests.
- The reporter’s physician rejected the chatbot’s findings as incorrect, and cardiologist Eric Topol called the analysis baseless, warning that such tools can alarm healthy users or falsely reassure others.
- OpenAI acknowledged the inconsistent responses and said it is working to improve the product, noting that the rollout remains limited to waitlisted beta users while it refines the experience.
- Anthropic’s Claude produced a flawed result as well, issuing a C grade while overlooking known limitations of wearable-derived metrics, such as the imprecision of VO2 max estimates and changes in device sensors across watch generations.
- OpenAI positions the tool as informational and says linked health data are encrypted and excluded from model training, yet consumer use is not covered by HIPAA, and whether regulators will treat the tool as a medical device remains unsettled.