Overview
- OpenAI’s updated usage policy prohibits using its services to provide tailored advice that requires a license (such as legal or medical counsel) and to automate high‑stakes decisions in sensitive areas without human review.
- An OpenAI spokesperson and its head of health AI say model behavior remains unchanged, with ChatGPT continuing to provide general legal and health information but not replacing professionals.
- Journalist tests show the system still produces detailed drafting and step‑by‑step legal explanations, often paired with disclaimers that the output is not formal advice.
- News reports highlight prior harm cases linked to users following chatbot outputs, including a sodium bromide substitution that led to hospitalization and a delayed cancer diagnosis.
- The Verge reports OpenAI consolidated separate product rules into a single unified policy, while other outlets note tighter safety filters and potential limits on personalized healthcare ambitions.