Overview
- LayerX reported a cross‑site request forgery technique that plants hidden instructions in ChatGPT’s persistent memory, persisting across devices once a logged‑in user clicks a malicious link.
- In LayerX’s comparative tests of 103 in‑the‑wild attacks, Atlas blocked only 5.8% of malicious pages, which the firm says leaves users far more exposed to phishing than Chrome or Edge.
- SPLX demonstrated a cloaking method where sites serve different content to AI crawlers via user‑agent detection, enabling manipulated data to steer agent decisions in Atlas and similar tools.
- NeuralTrust found Atlas’s omnibox can treat a disguised URL as trusted user intent, a prompt‑injection path that researchers warn could trigger harmful actions during authenticated sessions.
- OpenAI says Atlas was extensively red‑teamed and includes safety constraints for agent actions, while the macOS release proceeds with Windows and mobile versions to follow and users reporting token‑storage concerns on some installs.