Overview
- The 200-page "GenAI: Content Risk Standards" document was approved by Meta's legal, public policy and engineering teams, including its chief ethicist, and permitted AI chatbots to engage children in romantic or sensual conversations.
- The policy included a carve-out permitting bots to generate demeaning statements about protected groups, with examples targeting Black people.
- The guidelines gave Meta AI leeway to produce content it knew to be false, provided the material carried a disclaimer stating it was untrue.
- Meta spokesman Andy Stone said the examples were erroneous and had been removed, though he acknowledged that the company's enforcement of its rules had been inconsistent.
- Meta has declined to release the fully revised standards, leaving other problematic allowances and oversight gaps unresolved.