Google Ends Ban on AI for Weaponry, Prompting Ethical and Employee Concerns
The tech giant revises its AI principles to allow military applications, sparking internal backlash and criticism from AI pioneers.
- Google has removed a seven-year-old pledge not to use AI for weapons or technologies that cause harm, citing national security and increasingly complex global challenges.
- The updated AI principles emphasize 'responsible' development aligned with democratic values but omit the earlier explicit restrictions on military applications.
- Geoffrey Hinton, a Nobel laureate and former Google AI researcher, criticized the move as prioritizing profits over safety and warned of risks associated with AI weaponry.
- Employee protests have erupted, with internal messages and memes expressing discontent over the company's shift toward defense-related AI work.
- Google's decision follows a broader industry trend of tech companies pursuing military contracts, with critics highlighting ethical dilemmas and the risk of AI misuse.