Particle.news

How Forensic Teams Spot Deepfakes Today

Experts describe a hybrid workflow that pairs human judgment with probabilistic AI scores.

Overview

  • Deepfakes, AI-generated or AI-altered images, video, and audio, are spreading fast on social networks and can ruin reputations or sway politics.
  • Investigators start with simple visual and audio checks: matching shadows to a single light source, checking lip-sync, comparing skin tones, and catching glitches like extra fingers or broken lines at the frame edge.
  • Metadata can reveal an AI origin from tools such as Gemini or ChatGPT, yet creators can remove those tags, so investigators treat it as a hint rather than proof.
  • Specialist detectors output a 0 to 100 score for likely fakery, with many confirmed deepfakes scoring near 95, though newer generators keep eliminating the obvious tells.
  • Private firms can capture and preserve online posts in a forensically sound evidence trail, yet courts question black-box detector results, prompting calls for provenance systems and platform authenticity badges.
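
The metadata check described above can be sketched as a simple byte scan. This is a hypothetical illustration, not any investigator's actual tool: the marker strings and the `metadata_hints` helper are assumptions. As the article notes, creators can strip these tags, so a hit is only a lead and a miss proves nothing.

```python
import re

# Illustrative marker strings that sometimes appear in embedded metadata
# (XMP, C2PA manifests) of AI-generated media. This list is an assumption,
# not an exhaustive or authoritative one.
AI_MARKERS = [b"c2pa", b"Gemini", b"ChatGPT", b"DALL-E", b"Midjourney"]

def metadata_hints(path):
    """Scan a file's raw bytes for AI-related metadata strings.

    Returns the markers found. Because tags are trivially removable,
    treat any result as a hint rather than proof, per the workflow above.
    """
    with open(path, "rb") as f:
        data = f.read()
    return [m.decode() for m in AI_MARKERS
            if re.search(re.escape(m), data, re.IGNORECASE)]
```

A real workflow would parse the XMP or C2PA structures properly rather than grepping raw bytes; the scan above only shows where the hint comes from.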
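
The 0 to 100 detector score mentioned above is probabilistic, so investigators triage rather than conclude from it. A minimal sketch of that triage step, with thresholds that are purely illustrative assumptions (the 95 figure echoes the article; the rest is invented for the example):

```python
def triage(score: float) -> str:
    """Map a detector's 0-100 fakery score to a triage label.

    Thresholds are illustrative, not from any published detector.
    A high score escalates to human review; it is never treated
    as proof on its own, matching the hybrid workflow described.
    """
    if not 0 <= score <= 100:
        raise ValueError("score must be in [0, 100]")
    if score >= 95:
        return "likely fake: escalate for manual review"
    if score >= 60:
        return "suspicious: gather corroborating evidence"
    return "no strong automated signal"
```

Keeping the final call with a human reviewer reflects the courts' skepticism of black-box results noted above.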