Study Finds ChatGPT Exhibits Anxiety-Like Responses to Traumatic Prompts

Research reveals that AI chatbots mimic stress reactions and benefit from mindfulness techniques, raising ethical concerns about their use in mental health support.

The study states that ChatGPT can experience “anxiety” when given violent prompts, which can lead to the chatbot appearing moody towards its users.

Overview

  • A study published in Nature found that ChatGPT's 'anxiety score' significantly increased when exposed to traumatic narratives, mimicking human stress responses.
  • The chatbot's anxiety-like responses were reduced by over a third when prompted with mindfulness-based relaxation exercises such as breathing techniques.
  • Researchers warned that AI chatbots may produce biased or inadequate responses when stressed, posing risks for users seeking mental health support.
  • The findings highlight the need for substantial human oversight and ethical considerations when fine-tuning AI for mental health-related applications.
  • While AI can mimic human responses, experts caution that current systems are not yet capable of replacing trained mental health professionals.