ChatGPT Blocks 250,000 Deepfake Requests During U.S. Election
OpenAI implemented safety measures in ChatGPT to prevent the creation of misleading images of political figures during the 2024 U.S. election.
- OpenAI's ChatGPT rejected over 250,000 requests to generate images of 2024 U.S. presidential candidates, including Trump, Biden, Harris, Vance, and Walz.
- These safety protocols were designed to curb the spread of misinformation, particularly in the lead-up to the election.
- ChatGPT directed users to trusted sources like CanIVote.org and major news outlets for accurate election information.
- OpenAI actively monitored and disrupted over 20 deceptive operations attempting to use its models for misinformation.
- Despite a broader rise in deepfake content, none of the attempts to influence the U.S. election using OpenAI's tools gained viral traction.