Deepfake Crisis Escalates: Taylor Swift Victim of Viral AI Pornography

Lawmakers and Tech Companies Grapple with Increasing Prevalence of AI-Generated Non-Consensual Imagery

[Image: A conceptual illustration of US lawmakers' efforts to combat deepfake pornography, showing a large hand with a gavel clamping down on a tablet displaying a blurred image of a woman, symbolizing deepfakes.]
[Image: Microsoft Bing announcement, Redmond, Wash., Feb. 7, 2023. (GeekWire Photo / Todd Bishop)]

Overview

  • Sexually explicit deepfake images of Taylor Swift have gone viral, sparking widespread condemnation and highlighting the growing problem of AI-generated non-consensual intimate imagery.
  • Adobe is developing Content Credentials, a watermarking tool for AI-generated photos that could help track down the creators of abusive images, though critics argue it is not a comprehensive solution.
  • Law enforcement agencies are struggling with a surge of AI-generated fake child sexual abuse images, which is complicating investigations into real crimes against children.
  • State lawmakers across the U.S. are seeking ways to combat nonconsensual deepfake images, with at least 10 states having enacted related laws and many more considering measures.
  • Experts predict that cases involving AI-generated child sexual abuse material will grow exponentially, raising questions about whether existing federal and state laws are adequate to prosecute these crimes.