Overview
- MeitY’s proposal would require visible disclosure of synthetically generated information (SGI), with labels covering at least 10% of visuals or appearing in the first 10% of audio.
- Significant social media intermediaries with more than 50 lakh (5 million) users would have to collect user self-declarations, verify them, and deploy automated tools to detect and tag synthetic content.
- Rights advocates, including the Internet Freedom Foundation, argue the broad SGI definition could compel speech, enable general monitoring, and chill lawful satire, remixes, and filter-driven creativity.
- Industry voices flag cost and accuracy limits of detection systems and note that visible labels and metadata can be cropped, stripped, or spoofed, shifting burdens to compliant users and platforms.
- Experts warn that transparency marks alone will not deter AI-enabled crimes and urge targeted legal remedies; the draft tracks EU and Chinese transparency approaches, while platforms such as YouTube and Meta expand their own labelling tools.