University of Chicago Researchers Develop Nightshade to Disrupt AI Image Generators, Protecting Artists' Works from Unauthorized Use

The tool, named Nightshade, subtly alters images to make them harmful to generative AI models, introducing significant errors in how a model perceives the images and ultimately rendering the model useless.

  • Nightshade, a new tool developed by researchers at the University of Chicago, subtly alters images to make them harmful to generative AI models, protecting artists' works from unauthorized scraping by AI programs.
  • The tool works by inserting data into an image's pixels that disrupts the AI image processors that scour the internet for pictures to train on, causing significant perception errors and functionality problems for the model (a conceptual sketch follows this list).
  • Nightshade aims to disincentivize unauthorized training on scraped data and instead encourage the use of legitimate, licensed content. It has little to no impact on AI models that honor opt-outs and do not scrape data.
  • Nightshade's predecessor, Glaze, was also designed to protect artists, in that case by disrupting AI's ability to mimic an artist's style. Whereas Glaze alters how AI interprets an image without changing how humans perceive it, Nightshade goes further, corrupting the AI model itself by causing it to malfunction.
  • While the tool holds promise for artists' copyright protection, it comes with challenges, such as the possibility that its protections could be circumvented or stripped out. Nightshade also offers no retroactive protection for the vast amount of content already posted online.
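
For intuition, the toy sketch below shows the general idea of a pixel-level perturbation: a small, bounded change to pixel values that leaves an image visually unchanged to humans while altering the data a scraper would collect. This is only an illustration under assumed simplifications; it is not the actual Nightshade algorithm, which computes optimized, model-targeted perturbations rather than random noise, and the function name `perturb` and the `epsilon` bound are hypothetical choices for this example.

```python
# Toy illustration of a pixel-level perturbation. NOT the real Nightshade
# algorithm: Nightshade optimizes perturbations against specific model
# features; here we just add small random noise to show the mechanism.
import numpy as np
from PIL import Image

def perturb(image: Image.Image, epsilon: int = 4, seed: int = 0) -> Image.Image:
    """Add a small, human-imperceptible noise pattern to every pixel.

    `epsilon` bounds the per-channel change (out of 255), so the image
    looks unchanged to people while its pixel data shifts slightly.
    """
    rng = np.random.default_rng(seed)
    arr = np.asarray(image, dtype=np.int16)
    noise = rng.integers(-epsilon, epsilon + 1, size=arr.shape, dtype=np.int16)
    poisoned = np.clip(arr + noise, 0, 255).astype(np.uint8)
    return Image.fromarray(poisoned)

# Example: perturb a synthetic image and confirm the change stays tiny.
original = Image.new("RGB", (256, 256), color=(120, 80, 200))
poisoned = perturb(original)
diff = np.abs(np.asarray(original, np.int16) - np.asarray(poisoned, np.int16))
print(f"max per-channel change: {diff.max()} / 255")  # stays within epsilon
```

The key property the sketch demonstrates is the bounded budget: because every channel changes by at most a few intensity levels, a human viewer sees the same picture, while any model trained on the altered pixels receives systematically shifted data.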