Tech giants form AI industry group to promote safety standards and collaborate with policymakers amid growing calls for regulation
- Google, Microsoft, OpenAI and Anthropic launched the Frontier Model Forum to ensure safe AI development.
- The forum aims to advance AI safety research, identify best practices, and collaborate with academics, civil society and governments.
- The announcement follows AI leaders warning Congress about potential risks such as bioweapons, and comes as lawmakers explore regulation.
- The forum seeks to help shape policies and standards as scrutiny of unchecked AI advances grows in the US and EU.
- The founding companies previously made voluntary AI safety commitments to the White House.