OpenAI Introduces New Safety Plan to Mitigate AI Risks
Board of Directors Given Final Say in Decision-Making Process
- OpenAI has introduced a new safety plan, the 'Preparedness Framework', designed to guard against worst-case outcomes from the advanced AI technology it is developing.
- The Preparedness Framework includes a dedicated team that will monitor and mitigate potential risks from advanced AI models.
- Under the new framework, the board of directors has the final say and can reverse safety decisions made by the OpenAI leadership team.
- AI models are evaluated for risk in four categories: cybersecurity; persuasion; model autonomy; and chemical, biological, radiological, and nuclear (CBRN) threats.
- OpenAI expects to update the framework regularly based on feedback; it is part of the company's broader effort to self-regulate and to ensure that AI tools are developed and deployed safely.