OpenAI Implements New Measures to Combat Election Misinformation
Despite the introduction of stricter policies and plans to incorporate digital credentials into AI-generated images, the effectiveness of these measures remains uncertain.
- OpenAI, the artificial intelligence company, has updated its policies to combat election misinformation, barring the use of its tools to impersonate candidates or local governments, or to conduct campaigns or lobbying.
- OpenAI plans to incorporate the Coalition for Content Provenance and Authenticity’s (C2PA) digital credentials into images generated by DALL·E, making it easier to identify artificially generated images.
- OpenAI's tools will direct voting questions in the United States to CanIVote.org, a reliable authority on where and how to vote in the U.S.
- Despite these measures, OpenAI faces challenges in policing its platform and enforcing these policies at scale, and their effectiveness remains unclear.
- Concerns have been raised that AI tools could increase both the sophistication and the volume of political misinformation; instances of election-related lies generated by AI tools have already been documented.