India proposes strict rules to label AI content citing growing risks

In a significant move to address the growing risks of artificial intelligence (AI) misuse, the Indian government has proposed stringent regulations requiring AI and social media companies to clearly label AI-generated content. Announced on Wednesday, October 22, 2025, the draft rules aim to curb the spread of deepfakes and misinformation, particularly in a country with nearly 1 billion internet users and diverse ethnic and religious communities where fake news can incite deadly conflict. The proposal follows similar initiatives by the European Union and China.

The new regulations mandate that AI-generated content carry labels covering at least 10% of the visual display or the first 10% of an audio clip's duration. Social media platforms must also obtain user declarations confirming whether uploaded content is AI-generated and implement technical measures to ensure transparency and accountability. The Indian IT Ministry said the rules will "ensure visible labelling, metadata traceability, and transparency for all public-facing AI-generated media." Public and industry feedback on the proposal is invited until November 6.

The government cited concerns about the increasing misuse of generative AI tools, which can spread misinformation, manipulate elections, or impersonate individuals. High-profile deepfake lawsuits, including those involving Bollywood stars Abhishek Bachchan and Aishwarya Rai Bachchan, are currently before Indian courts.

The proposed regulations are among the first attempts worldwide to set quantifiable visibility standards for AI-generated content. If implemented, AI platforms operating in India would need to develop automated labeling systems that identify and mark AI-generated content at the point of creation. India is rapidly becoming a major market for AI firms: OpenAI CEO Sam Altman noted in February that India is the company's second-largest market by number of users, a base that has tripled in the past year.