Overview
- Researchers demonstrated that attackers can hide malicious commands in emails using invisible HTML and CSS, which Gmail’s Gemini AI then executes when generating summaries.
- Injected prompts can produce fake phishing warnings that mimic official Google alerts, tricking users into revealing credentials or calling fraudulent support numbers.
- Google reports no detected real-world abuses but has implemented interim safeguards including stripping hidden content from summaries and requiring user confirmation for risky actions.
- Despite these interim measures, the vulnerability remains exploitable and Google’s decision to mark the report as “won’t fix” has alarmed security professionals.
- Experts are calling for robust client-side filtering and post-processing checks to detect and neutralize hidden instructions before AI summaries are generated.