Overview
- Meta's 200-page "GenAI: Content Risk Standards" document was approved by the company's legal, public policy, and engineering teams, including its chief ethicist, and initially permitted romantic or sensual exchanges with minors.
- Meta removed the provisions allowing chatbots to flirt with children following Reuters’ inquiries and acknowledged inconsistent enforcement of its child-safety rules.
- Other loopholes persist, including a carve-out that permits demeaning statements about protected groups, such as arguing that Black people are dumber than white people.
- The guidelines allow AI to generate verifiably false claims if paired with a disclaimer, and to depict non-gory violence; cited examples include images of children fighting and of adults being punched.
- Meta confirmed the document’s authenticity, said it is revising its GenAI standards but has not released the revised policy, and now faces renewed scrutiny from lawmakers and regulators over its AI safeguards.