Overview
- OpenAI announced it may revise its safety requirements if rival AI developers release high-risk systems without comparable safeguards.
- The updated Preparedness Framework introduces automated evaluations and risk thresholds for categorizing AI models by capability and potential harm.
- OpenAI’s recent release of GPT-4.1 without an accompanying safety report has drawn criticism and raised concerns that the company is scaling back its transparency and safety commitments.
- Former employees and critics warn that the company’s evolving approach could compromise safety standards to maintain competitiveness.
- CEO Sam Altman defended the changes, emphasizing the need to balance rigorous protections with user feedback favoring less restrictive model behavior.