Experts Call for Regulation of AI Compute Hardware to Ensure Safety
A new report highlights the need for policies targeting AI's physical components to prevent misuse and enhance governance.
- A new report co-authored by experts from the University of Cambridge and OpenAI argues that AI compute hardware should be regulated to prevent misuse and improve safety.
- The report suggests measures such as a global registry of AI chips, caps on compute, and a "start switch" for large AI training runs whose control is shared among multiple parties (a toy illustration of that idea follows this list).
- Governments worldwide are increasingly turning to computing hardware as a lever for AI governance, through policies such as the US Executive Order on AI and the EU AI Act.
- The report highlights the risks of unregulated AI, including threats to privacy, negative economic impacts, and the potential for centralization of power.
- Proposed solutions also include built-in kill switches for AI hardware and prioritizing compute for socially beneficial research.
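The multiparty "start switch" and compute-cap proposals are policy ideas, not a specification, but the underlying mechanism resembles an M-of-N approval gate. The following is a minimal sketch of that general idea, assuming a made-up compute threshold and invented names (TrainingRunRequest, AUTHORIZED_PARTIES, etc.); none of it is taken from the report itself.

```python
from dataclasses import dataclass, field

# Hypothetical illustration of an M-of-N "start switch": a large training run
# may only begin once enough authorized parties have signed off. All names and
# numbers below are invented for this sketch, not drawn from the report.

APPROVAL_THRESHOLD = 2  # approvals required before a capped run may start
AUTHORIZED_PARTIES = {"regulator", "cloud_provider", "independent_auditor"}


@dataclass
class TrainingRunRequest:
    run_id: str
    requested_flops: float                      # estimated total training compute
    approvals: set = field(default_factory=set)

    def approve(self, party: str) -> None:
        """Record an approval from one of the authorized parties."""
        if party not in AUTHORIZED_PARTIES:
            raise ValueError(f"{party} is not an authorized approver")
        self.approvals.add(party)

    def may_start(self, compute_cap_flops: float) -> bool:
        """Runs under the cap start freely; larger runs need enough approvals."""
        if self.requested_flops <= compute_cap_flops:
            return True
        return len(self.approvals) >= APPROVAL_THRESHOLD


# Example: a run above a (made-up) 1e26 FLOP cap needs two of three sign-offs.
request = TrainingRunRequest(run_id="run-001", requested_flops=5e26)
request.approve("regulator")
print(request.may_start(compute_cap_flops=1e26))   # False: only one approval
request.approve("independent_auditor")
print(request.may_start(compute_cap_flops=1e26))   # True: threshold reached
```

In practice the report envisions such controls being enforced in hardware and infrastructure rather than in application code; the sketch only shows the quorum logic in its simplest form.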