Meta’s Oversight Board Investigates Handling of AI-Generated Explicit Images
The board reviews cases in the U.S. and India, highlighting inconsistencies in content moderation.
- Meta’s Oversight Board is probing two cases involving AI-generated explicit images of public figures posted to Instagram and Facebook, examining how the platforms responded in the U.S. and India.
- In both incidents, Meta’s automated systems initially failed to remove the images; they were taken down only after appeals escalated the cases to the Oversight Board.
- The board aims to assess the effectiveness of Meta’s policies and enforcement practices globally, with a focus on protecting women from non-consensual deepfake pornography.
- The board is accepting public comments until April 30 to gather insights on the harms of deepfake pornography and the effectiveness of Meta’s detection systems.
- Experts criticize the platforms for placing the burden of reporting non-consensual imagery, and proving it is non-consensual, on victims, arguing that content moderation needs to be far more proactive.