Particle.news

MIT Study Finds Many AI Relationships Begin Unintentionally With General-Purpose Chatbots

The findings underscore uneven mental-health effects, raising new questions for chatbot safety.

Overview

  • Researchers analyzed 1,506 top-ranked posts from December 2024 to August 2025 in r/MyBoyfriendIsAI, an adults-only subreddit with over 27,000 members.
  • Within the sample, 10.2% reported relationships that emerged from productivity-focused use, while 6.5% said they deliberately sought an AI companion.
  • Participants more often described attachments to general-purpose LLMs such as ChatGPT than to dedicated companion apps: 36.7% of the sample versus 1.6% for Replika and 2.6% for Character.AI.
  • About 25% cited benefits such as reduced loneliness or improved mental health, while reported risks included emotional dependency (9.5%), reality dissociation (4.6%), avoidance of real relationships (4.3%), and suicidal ideation (1.7%).
  • The paper is posted on arXiv and is under peer review. The dataset is limited to top-ranked posts and lacks demographic visibility, and the results intersect with ongoing lawsuits against Character.AI and OpenAI, as well as OpenAI’s teen-focused safeguards and age checks.