Side-Channel Vulnerability Exposes Encrypted AI Assistant Chats
Researchers have demonstrated that network eavesdroppers can infer the content of encrypted AI assistant conversations without breaking the encryption itself, posing a significant privacy risk.
- Israeli researchers identified a side-channel vulnerability in AI assistants that lets an eavesdropper reconstruct the likely content of encrypted conversations.
- The exploit works because assistants stream responses token by token: the size and sequence of the encrypted packets reveal token lengths, from which sensitive text can be inferred.
- Most major AI assistants, with the exception of Google's Gemini, were found to be susceptible to this side-channel attack.
- Cloudflare has implemented mitigation measures to protect against these attacks on its AI products.
- Experts recommend adding random padding to streamed responses to obscure token lengths and prevent content inference.
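The leak and the padding mitigation described above can be illustrated with a minimal sketch. The per-record overhead value, function names, and the fixed padding block size are all illustrative assumptions, not details from the research:

```python
# Illustrative sketch of the token-length side channel and its mitigation.
# Encryption hides content but not length: if each streamed token travels
# in its own encrypted record, record sizes track token lengths.
RECORD_OVERHEAD = 29  # assumed fixed per-record ciphertext overhead (illustrative)

def token_lengths(record_sizes):
    """Eavesdropper's view: recover the token-length sequence
    from observed encrypted record sizes."""
    return [size - RECORD_OVERHEAD for size in record_sizes]

def pad_token(token, block=16):
    """Mitigation sketch: pad each token to a multiple of `block` bytes
    so on-wire sizes no longer reveal individual token lengths."""
    pad = (-len(token)) % block or block
    return token + "\x00" * pad

# Without padding, sizes 33 and 30 reveal tokens of length 4 and 1.
leaked = token_lengths([33, 30])  # -> [4, 1]

# With padding, short tokens all land in the same size bucket.
padded_sizes = {len(pad_token(t)) for t in ["Sure", ",", " here", " is"]}
```

Random padding (a variable rather than fixed pad) further prevents an attacker from learning even the size bucket, at the cost of extra bandwidth.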