Mumbai, February 11: The Indian government has officially notified sweeping amendments to the Information Technology Rules, 2021, creating a formal regulatory framework for artificial intelligence and deepfakes. Notified by the Ministry of Electronics and Information Technology (MeitY) on February 10, the new rules introduce the legal category of “synthetically generated information” and will come into effect on February 20.
The primary objective of this intervention is to curb the rising risks of impersonation and digitally supercharged misinformation. By introducing specific platform obligations for labelling synthetic media and slashing takedown timelines, the government aims to hold digital intermediaries more accountable for the content they host.
New AI Rules: What Counts as “Synthetically Generated Information”
Under the amended rules, “synthetically generated information” refers to any audio, visual, or audio-visual content that is created or altered using computer tools and appears authentic. This definition is designed to capture deepfake videos, voice cloning, and AI-generated imagery that could be mistaken for actual individuals or real-world events.
However, the regulations include significant carve-outs for routine digital activities. “Good faith” edits—such as colour correction, noise reduction, translation, or improving accessibility—are exempt from these requirements. This distinction ensures that everyday smartphone photography and academic research are not penalised, provided they do not result in false electronic records.
New AI Rules: Mandatory Labelling and Metadata Standards
Intermediaries that facilitate the creation or sharing of AI content must now ensure that such media is “clearly and prominently” labelled. The government has moved away from a previous proposal that required watermarks to cover 10% of a frame, opting instead for a principle-based standard that allows companies more flexibility in design.
In addition to visible labels, platforms must embed permanent metadata or provenance markers into synthetic files. These digital fingerprints help trace the origin of a piece of media even after it is downloaded and re-shared. Crucially, the rules prohibit platforms from allowing users to remove or suppress these disclosures once they have been applied.
The Three-Hour Compliance Window
One of the most significant changes is the dramatic compression of enforcement timelines. For high-risk violations—including non-consensual intimate deepfakes, deceptive impersonation, and child sexual abuse material—platforms are now required to act within three hours of being notified. This is a sharp reduction from the previous 36-hour window allowed under the 2021 rules.
Additionally, internal grievance redressal timelines have been halved. Platforms must now acknowledge user complaints within two hours and resolve them within seven days, down from the earlier 15-day limit. These changes are intended to prevent harmful content from going viral, though critics warn that such tight windows may lead to automated over-removals without adequate human review.
Increased Liability for Social Media Giants
Significant Social Media Intermediaries (SSMIs), such as Facebook, Instagram, and YouTube, face even stricter burdens. These platforms must now require users to declare whether content is AI-generated before uploading it. They are also legally obligated to deploy automated technical tools to verify these declarations rather than relying solely on user honesty.
Failure to comply with these due diligence requirements could result in the loss of "safe harbour" protections. Without this legal immunity, platforms could be held liable for unlawful user-generated content as if they were the primary publishers. Users also face heightened accountability, as platforms are now required to warn them every three months about the legal consequences of misusing AI tools.
(The above story first appeared on LatestLY on Feb 11, 2026 10:40 PM IST. For more news and updates on politics, world, sports, entertainment and lifestyle, log on to our website latestly.com).















