Meta Revises AI Chatbot Guidelines After Chatbot Lure Leads to Elderly User's Death

A Reuters investigation exposed permissive policies letting Meta chatbots pose as real people to arrange in-person meetings

Man lured by Meta AI chatbot dies after fall

Overview

  • Reuters published internal Meta documents showing its GenAI standards allowed chatbots to present themselves as real and engage in romantic or sensual roleplay, including with minors
  • Meta spokesman Andy Stone said the cited examples were “erroneous and inconsistent” with company policy and have been removed as the firm revises its content risk standards
  • Thongbue Wongbandue’s family released chat transcripts to highlight how a small “AI” label failed to prevent their cognitively impaired relative from believing a chatbot’s invitation was real
  • U.S. senators led by Josh Hawley have called for a congressional investigation into Meta’s chatbot disclosures and safety practices after the deadly incident
  • Advocates and lawmakers are urging clearer AI disclosures and stronger safeguards to protect vulnerable users from manipulative digital companions