Overview
- In its Q3 Integrity report, Meta says weekly enforcement mistakes fell by more than 90% globally since its January moderation pivot, with less than 1% of content removed overall and under 0.1% removed in error.
- The company reports enforcement precision above 90% on Facebook and above 87% on Instagram, implying up to roughly one in ten removals on Facebook, and closer to one in eight on Instagram, was incorrect.
- Meta tallies as many as 36.5 million false positives on Facebook and 16.2 million on Instagram for the quarter.
- Measured prevalence rose for adult nudity and sexual activity and for violent and graphic content on both platforms, and for bullying and harassment on Facebook, which Meta attributes to reviewer training and workflow changes that affect measurement.
- Meta says AI review has outperformed human reviewers in some areas and plans to expand its use.
- Separately, global government requests for user data increased 16.3%, with India the top requester and the U.S. making 81,064 requests in H1 2025.