Overview
- Presented at Gartner’s Security & Risk Management Summit 2025, a survey of 302 security leaders found 62% experienced deepfake or other AI-driven attacks in the past 12 months across North America, EMEA and Asia/Pacific.
- Deepfake audio calls were the most common vector at 44%, with 6% causing business interruption, financial loss or IP loss; reported loss rates dropped to 2% where audio-screening services were used.
- Video deepfakes were reported by 36% of organizations, and 5% of those incidents caused serious problems, with tactics including a brief executive video on WhatsApp before shifting to text to continue the fraud.
- Attacks on AI applications, including prompt injection, were reported by 32% of respondents, with researchers demonstrating exploitation paths in systems such as Gemini, Claude and ChatGPT.
- Experts advise integrating deepfake detection into collaboration platforms like Microsoft Teams or Zoom, running simulation-based staff training, and enforcing application-level approvals with phishing-resistant MFA, as separate reporting notes detector accuracy has slid toward about 65% in 2025.