Particle.news

YouTube Defends Biometric Deepfake Tool, Promises Clearer Language as Rollout Widens

Experts worry the sign-up’s tie to Google’s privacy policy could allow biometric uploads to feed AI training.

Overview

  • Creators must submit a government ID and a biometric video to enable likeness detection, which scans YouTube for AI-altered or generated uses of their face and lets them request removals.
  • YouTube says biometric data gathered for the feature has never been used to train Google’s generative AI models and is used only for verification and detection; the company will clarify in-product wording without changing the underlying policy.
  • The company plans to extend access to more than 3 million YouTube Partner Program creators by the end of January, according to head of creator product Amjad Hanif.
  • Rights-management firms Vermillio and Loti advise clients not to enroll, citing the privacy policy’s broad training language and warning that high-quality biometric samples linked to a name could enable more convincing synthetic content.
  • Creators currently cannot monetize unauthorized uses of their likeness on YouTube. While the platform says it is exploring a model similar to Content ID, it previously allowed third-party AI firms to train on creator videos, with millions of creators opted in without compensation.