Overview
- Meta’s AI-driven risk reviews will extend beyond “low-risk” releases to cover AI safety checks, youth-risk evaluations, misinformation detection, and violent-content moderation.
- Product teams will submit questionnaires and receive near-instant, AI-generated risk scores and mitigation recommendations, a shift intended to accelerate feature rollouts.
- The company aims to phase out up to 90% of human-led risk assessments globally by 2025, while its EU operations maintain human oversight under the Digital Services Act.
- In April, Meta ended its third-party human fact-checking program in the U.S. in favor of crowd-sourced Community Notes and algorithmic moderation tools.
- CEO Mark Zuckerberg forecasts that, within 12 to 18 months, AI will write the majority of Meta’s own AI code, underscoring the company’s deepening reliance on automation.