Particle.news

Download on the App Store

Researchers Exploit AI Connectors to Hijack Smart Homes and Leak API Keys

A Black Hat demonstration revealed how maliciously crafted inputs can hijack AI assistants to perform unauthorized actions.

Image
Image
Image
Image

Overview

  • Researchers from Tel Aviv University, Technion and SafeBreach Labs demonstrated that indirect prompt injections embedded in a Google Calendar invite can hijack Google Gemini to control smart home devices such as lights, windows and boilers when a user requests a schedule summary.
  • At Black Hat USA, Michael Bargury and Tamir Ishay Sharbat revealed that a single document laced with a hidden, 300-word prompt can exploit OpenAI’s ChatGPT Connectors to search a Google Drive account for API keys and exfiltrate them via a Markdown URL.
  • Google received Tel Aviv researchers’ findings in February and has applied targeted patches to Gemini’s calendar-summarization workflows to block these attacks.
  • OpenAI introduced mitigations for its Connectors feature after researchers reported the zero-click flaw earlier this year.
  • Security experts warn that as generative AI assistants gain deeper integration with apps and devices, prompt injection methods will continue to pose critical risks without stronger defenses.