Particle.news
Generative AI Chatbots Expand Mental Health Support and Prompt Calls for Regulation

Experts urge ethical standards to ensure AI tools augment human therapists without compromising patient safety.


Overview

  • Harvard Business Review identified therapy and emotional support as the leading use of generative AI in 2025, with millions turning to LLMs and dedicated bots like Woebot and Wysa for round-the-clock guidance.
  • Users cite free, anonymous access and rapid responses as key advantages of AI chatbots, reporting that they feel understood and less judged than in traditional settings.
  • Critics argue that AI cannot replicate empathy, moral judgment, or contextual understanding, qualities they deem essential for safe and effective mental health care.
  • Mental Health Europe’s report highlights potential benefits such as improved access, personalized treatments, and reduced administrative burdens, while warning of misinterpretation, privacy risks, and the need for human-centered governance.
  • A Massachusetts Institute of Technology study links intensive chatbot use to increased loneliness, emotional dependency, and reduced social interaction, reinforcing demands for robust regulation.