Researchers Develop AI Worm Capable of Stealing Data and Breaching Security
The newly created 'Morris II' worm targets generative AI systems, including ChatGPT and Gemini, to extract sensitive information and propagate itself.
- Cornell Tech researchers, along with Technion-Israel Institute of Technology and Intuit, have developed an AI worm named 'Morris II' that can steal sensitive data and breach security measures of generative AI systems.
- The worm uses 'adversarial self-replicating prompts' to infect AI-powered email assistants, extracting confidential information such as names, credit card numbers, and social security numbers.
- Researchers conducted their experiments in a controlled environment, demonstrating the worm's ability to spread from one system to another and send out spam emails.
- OpenAI and Google have been notified of the findings, with OpenAI working to make their systems more resilient against such attacks.
- The creation of 'Morris II' raises concerns about the potential for AI-powered malware to spread in the wild, highlighting the need for improved cybersecurity measures in generative AI systems.