India has enacted sweeping amendments to its digital governance framework, requiring social media platforms to remove unlawful content within three hours, a sharp reduction from the previous 36-hour window. The new rules, effective February 20th, apply to all major technology companies, including Meta, YouTube, and X, and also introduce first-of-their-kind provisions for artificial intelligence-generated media.
The Ministry of Electronics and Information Technology did not explain the rationale for the accelerated takedown timeline. Digital rights organizations, however, immediately warned that it could drive automated over-censorship in the world’s most populous democracy, home to more than one billion internet users.
The regulatory shift comes amid increasing government oversight of digital content. Existing Information Technology rules already allow authorities to order the removal of material deemed a threat to national security or public order, and transparency reports indicate that government requests led to the blocking of more than 28,000 web addresses in 2024.
The amendments introduce legal definitions for AI-generated content, targeting synthetic media that appears authentic, such as deepfakes, while exempting routine editing, accessibility features, and legitimate educational content. Platforms must clearly label AI-generated material and embed permanent digital markers to aid traceability; once applied, these labels cannot be removed.
Companies must also deploy automated detection systems to identify prohibited categories of AI content, including non-consensual intimate imagery, fraudulent documents, child exploitation material, explosives-related content, and impersonation attempts.
The Internet Freedom Foundation condemned the compressed timeline, warning it transforms platforms into “rapid fire censors” that prioritize automated removal over human judgment. Digital Futures Lab researcher Anushka Jain acknowledged the potential benefits of labeling requirements for transparency but cautioned that the extreme deadline would inevitably push companies toward full automation with reduced oversight.
Technology analyst Prasanto K Roy characterized the framework as “perhaps the most extreme takedown regime in any democracy,” noting that compliance would be “nearly impossible” without extensive automation and minimal human review. Roy further observed that while AI labeling intentions were positive, reliable and tamper-proof technologies remain under development.
Major technology firms have remained largely silent on the amendments. Meta declined to comment, while YouTube’s parent company Google and X have not issued public statements. The BBC has contacted the Indian government for a response to the concerns raised.