India Unveils AI Governance Blueprint, Moves to Require Labels on Synthetic Content
MeitY outlines a risk-based, institution-led approach grounded in existing law to rein in harms.
Overview
- The published guidelines prioritize sector-specific regulation over an immediate standalone AI law and set out six pillars spanning infrastructure, risk, accountability, policy, institutions, and capacity building.
- Draft amendments to the IT Rules would mandate user declarations, platform verification, visible labels and embedded metadata for AI-generated material, with safe-harbour protections at risk for platforms that do not comply.
- The framework proposes a new architecture featuring an AI Governance Group, a Technology & Policy Expert Committee and an AI Safety Institute to coordinate oversight and technical evaluation.
- Legal experts and industry participants dispute whether AI developers fit the IT Act’s intermediary definition and who bears liability for outputs, with arguments ranging from user-led responsibility to shared obligations for hybrid platforms.
- Internal government discussions flag privacy and inference risks from officials’ use of generative AI, including concerns about reliance on foreign services.
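The draft rules' pairing of visible labels with embedded, machine-readable metadata can be illustrated with a minimal sketch. This is not the mechanism the IT Rules amendments specify; the delimiter, field names (`ai_generated`, `generator`), and hash check below are all hypothetical, chosen only to show how a declaration could travel with the file and be verified by a platform.

```python
import json
import hashlib

# Hypothetical delimiter separating media bytes from the appended label record
MARKER = b"\n---SYNTHETIC-CONTENT-LABEL---\n"

def embed_label(media: bytes, generator: str) -> bytes:
    """Append an illustrative machine-readable provenance record to media bytes."""
    label = {
        "ai_generated": True,                                   # hypothetical field
        "generator": generator,                                 # hypothetical field
        "content_sha256": hashlib.sha256(media).hexdigest(),    # binds label to content
    }
    return media + MARKER + json.dumps(label).encode()

def verify_label(blob: bytes):
    """Platform-side check: split off the record and confirm it matches the content."""
    media, _, tail = blob.partition(MARKER)
    if not tail:
        return None  # no label present
    label = json.loads(tail)
    if label.get("content_sha256") != hashlib.sha256(media).hexdigest():
        return None  # label does not match the media it is attached to
    return label

blob = embed_label(b"fake-image-bytes", "example-model-v1")
print(verify_label(blob)["ai_generated"])  # True
print(verify_label(b"unlabelled-bytes"))   # None
```

Real deployments would more likely use format-native containers (e.g. image metadata chunks or provenance standards) rather than an appended record, but the verification idea is the same: the label rides with the file and the platform checks it before distribution.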