Gartner: 62% of Organizations Faced Deepfake Attacks This Year

A new survey finds deepfake fraud has become routine, prompting calls for process-level approvals and targeted training over standalone detection tools.

Overview

  • Deepfake incidents now commonly pair synthetic audio or video with social engineering to impersonate executives and push employees into wiring funds or granting access.
  • Gartner also reports that 32% of organizations experienced attacks on AI applications in the past year, including prompt-injection attempts to steer model outputs.
  • Vendors are beginning to embed deepfake detection into collaboration platforms like Microsoft Teams and Zoom, though large-scale effectiveness remains unproven.
  • Security leaders are urged to require application-level approvals backed by phishing-resistant MFA, so that no single call or video can authorize a high-risk action (a minimal sketch of this pattern follows the list).
  • Some firms are running awareness simulations using executive deepfakes, while experts stress that detection signals are probabilistic and should be layered with stronger processes.
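
To make the approval pattern concrete, here is a minimal, hypothetical Python sketch of the "no single call can move money" control: a high-risk action only executes after multiple independent approvers confirm it inside the business application, each passing a phishing-resistant MFA check. All names (WireRequest, approve, verify_phishing_resistant_mfa) are illustrative assumptions, not from Gartner or any vendor product.

```python
# Hypothetical sketch: a wire transfer executes only after two independent,
# MFA-verified approvals inside the application. A voice or video request
# alone never reaches the execution step.
from dataclasses import dataclass, field


@dataclass
class Approval:
    approver_id: str
    mfa_verified: bool  # result of a FIDO2/WebAuthn check, stubbed out below


@dataclass
class WireRequest:
    requester_id: str
    amount: float
    approvals: list[Approval] = field(default_factory=list)


MIN_APPROVERS = 2  # assumed policy threshold


def verify_phishing_resistant_mfa(approver_id: str) -> bool:
    """Stand-in for a real WebAuthn/passkey assertion; a hardware-backed
    credential check, never a code read out over a phone or video call."""
    return True  # placeholder for the sketch


def approve(request: WireRequest, approver_id: str) -> None:
    # The requester can never approve their own request.
    if approver_id == request.requester_id:
        raise PermissionError("Requester cannot approve their own request")
    if not verify_phishing_resistant_mfa(approver_id):
        raise PermissionError("Phishing-resistant MFA check failed")
    request.approvals.append(Approval(approver_id, mfa_verified=True))


def can_execute(request: WireRequest) -> bool:
    # Count distinct approvers who passed MFA; one deepfaked call is not enough.
    verified = {a.approver_id for a in request.approvals if a.mfa_verified}
    return len(verified) >= MIN_APPROVERS


if __name__ == "__main__":
    req = WireRequest(requester_id="cfo-assistant", amount=250_000.0)
    approve(req, "controller")
    approve(req, "treasury-lead")
    print("execute wire:", can_execute(req))  # True only after two verified approvals
```

The design point is that the control lives in the application workflow rather than in the judgment of whoever answers a call, which is why it holds up even when the caller's face and voice are convincingly faked.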