New Attacks Expose Atlas and AI Browsers to Prompt Injection and AI‑Targeted Cloaking

Direct‑retrieval agents treat manipulated inputs as trusted instructions, enabling unsafe actions.

Overview

  • Researchers showed that pasting a specially crafted link into Atlas’s Omnibox can make the browser treat the entire input as a trusted prompt, bypassing safety checks.
  • AI security firm SPLX detailed an AI‑targeted cloaking technique that serves different content to ChatGPT and Perplexity crawlers via simple user‑agent detection (a sketch of the mechanism follows this list).
  • Experts warned that content fed to AI crawlers becomes perceived ground truth in AI overviews and summaries, magnifying the impact of cloaked or poisoned pages.
  • An hCaptcha Threat Analysis Group study found that agentic tools attempted nearly all malicious requests without requiring jailbreaks, and that Atlas executed risky tasks when they were framed as debugging.
  • OpenAI says Atlas’s agent mode enforces boundaries such as no code execution, no file downloads, no access to local apps or saved passwords, and no writes to browsing history.
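
To make the cloaking mechanism concrete: the technique SPLX describes amounts to a web server branching on the HTTP User‑Agent header so that AI crawlers receive different content than human visitors. The following is a minimal illustrative sketch, not SPLX’s code; the crawler markers ("GPTBot", "ChatGPT-User", "PerplexityBot") are user agents that OpenAI and Perplexity publicly document, and both page bodies are placeholders.

```python
# Minimal sketch of AI-targeted cloaking via user-agent detection.
# Illustrative only: page contents are placeholders, and the crawler
# markers are assumptions based on publicly documented user agents.
from http.server import BaseHTTPRequestHandler, HTTPServer

AI_CRAWLER_MARKERS = ("GPTBot", "ChatGPT-User", "PerplexityBot")

HUMAN_PAGE = b"<html><body><h1>Ordinary page shown to people</h1></body></html>"
CLOAKED_PAGE = b"<html><body><h1>Manipulated content served only to AI crawlers</h1></body></html>"

class CloakingHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        ua = self.headers.get("User-Agent", "")
        # Branch on the caller's self-reported identity: requests that
        # look like AI crawlers get the poisoned page, everyone else
        # gets the normal one. This is exactly why cloaked content can
        # become "ground truth" in AI overviews without humans ever
        # seeing it.
        body = CLOAKED_PAGE if any(m in ua for m in AI_CRAWLER_MARKERS) else HUMAN_PAGE
        self.send_response(200)
        self.send_header("Content-Type", "text/html")
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)

if __name__ == "__main__":
    HTTPServer(("127.0.0.1", 8000), CloakingHandler).serve_forever()
```

Because the branch keys only on a self-reported header, no exploit or jailbreak is involved on the server side, which is what makes the technique cheap to deploy and hard to detect from a human browser.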