
Meta Patches AI Chatbot Flaw That Exposed Private User Prompts

Privacy regulators warn that, even with the vulnerability fixed, Meta must tighten default sharing settings and limit how long user data is retained


Overview

  • Meta deployed a backend fix on January 24, 2025 for an ID-based bug that had allowed users to view other people’s private prompts and AI-generated responses
  • Security researcher Sandeep Hodkasia reported the flaw in December 2024 and received a $10,000 bug bounty; Meta says it found no evidence that the issue was exploited
  • The vulnerability exposed weaknesses in Meta AI’s authorization checks (illustrated in the sketch after this list) and followed earlier complaints about confusing controls that sent private chats into a public Discover feed
  • Regulators including the U.K. Information Commissioner’s Office and privacy advocates such as the Mozilla Foundation are scrutinizing Meta’s default data sharing and retention practices
  • Meta’s standalone AI app, backed by a $14 billion investment, highlights the risks of rapid feature rollouts without robust privacy and security safeguards
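
The class of flaw described above is commonly known as an insecure direct object reference: a server returns a record looked up by a client-supplied numeric ID without confirming that the requester owns it. The sketch below is a minimal, hypothetical illustration of that pattern and its fix under those assumptions; the function names and data are invented for illustration and are not Meta's actual code.

```python
# Hypothetical illustration of a missing-ownership-check (IDOR-style) flaw and
# its fix. All names and data here are assumptions for demonstration only.

from dataclasses import dataclass


@dataclass
class Prompt:
    prompt_id: int
    owner_id: int
    text: str
    response: str


# In-memory stand-in for a prompt store keyed by a sequential numeric ID.
PROMPTS = {
    101: Prompt(101, owner_id=1, text="draft my resignation letter", response="..."),
    102: Prompt(102, owner_id=2, text="summarize my medical notes", response="..."),
}


def get_prompt_vulnerable(requesting_user_id: int, prompt_id: int) -> Prompt:
    # Flawed pattern: any authenticated user can fetch any prompt simply by
    # guessing or incrementing the numeric ID; ownership is never verified.
    return PROMPTS[prompt_id]


def get_prompt_fixed(requesting_user_id: int, prompt_id: int) -> Prompt:
    # Fixed pattern: the server confirms the prompt belongs to the requester,
    # and answers "not found" either way so IDs cannot be enumerated.
    prompt = PROMPTS.get(prompt_id)
    if prompt is None or prompt.owner_id != requesting_user_id:
        raise PermissionError("prompt not found")
    return prompt


if __name__ == "__main__":
    # User 1 can read user 2's prompt through the vulnerable path...
    print(get_prompt_vulnerable(requesting_user_id=1, prompt_id=102).text)
    # ...but the fixed path rejects the same request.
    try:
        get_prompt_fixed(requesting_user_id=1, prompt_id=102)
    except PermissionError as exc:
        print("blocked:", exc)
```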