New AI Rules Issued To Curb Deepfakes and Sexually Exploitative Content: Check Complete Guidelines
India has notified the New AI Rules 2026 under the IT Amendment Rules, requiring social media platforms to label AI-generated content and remove flagged deepfakes within three hours. Effective February 20, the New AI Rules 2026 define synthetic media and mandate automated tools to curb illegal and deceptive AI content online.
Mumbai, February 10: The Ministry of Electronics and Information Technology (MeitY) has officially notified the Information Technology (Intermediary Guidelines and Digital Media Ethics Code) Amendment Rules, 2026, creating a first-of-its-kind regulatory framework for AI-generated content. The new AI rules are set to take effect on February 20, mandating that digital platforms prominently label all "synthetically generated information" (SGI), including deepfakes, and significantly reducing the window for taking down illegal material.
The notification, signed by Joint Secretary Ajit Kumar, marks a decisive shift from advisory guidelines to binding statutory obligations. For the first time, Indian law provides a formal definition of synthetic content, covering audio, visual, or audio-visual material created or altered using computer resources that depicts people or events in a way that appears real or authentic.
New AI Rules: Definitions and Mandatory Labelling
Under the updated framework, any intermediary enabling the creation or dissemination of synthetic content must ensure it carries a clear and prominent label. This extends beyond simple visual tags; platforms are now required to embed persistent metadata and unique identifiers into the content to ensure traceability back to the source. The government has explicitly barred platforms from allowing the removal or suppression of these markers once applied.
However, the rules provide specific exemptions to avoid hindering routine digital activities. Technical enhancements such as colour correction, noise reduction, and translation are excluded, provided they do not distort the original meaning. Additionally, research papers, training materials, and hypothetical drafts used for illustrative purposes are exempt from the labelling mandate.
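The rules require a persistent unique identifier and traceability metadata but do not prescribe a technical format. As a purely illustrative sketch, a platform might derive an identifier from a content hash and attach it to a label record; the field names, the `make_sgi_label` helper, and the hash-based scheme below are all assumptions, not anything specified in the notification.

```python
import hashlib
import json

def make_sgi_label(content: bytes, generator: str) -> dict:
    """Build a hypothetical SGI label record: a unique identifier derived
    from a SHA-256 hash of the content, plus traceability metadata.
    (Illustrative only; the rules do not mandate this scheme.)"""
    return {
        "sgi": True,                 # content is synthetically generated
        "generator": generator,      # tool or platform that produced it
        # Persistent unique identifier tying the label to this exact content.
        "content_id": hashlib.sha256(content).hexdigest(),
    }

label = make_sgi_label(b"example synthetic image bytes", generator="example-ai-tool")
print(json.dumps(label, indent=2))
```

Because the identifier is a hash of the content itself, any attempt to alter the content after labelling would break the match, which loosely mirrors the rule barring removal or suppression of markers once applied.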
List of New AI Rules for Social Media To Curb Illegal AI-Generated Content
Significant social media intermediaries, including Facebook, Instagram, and YouTube, now face a markedly tighter compliance bar. The most notable change is the reduction of the takedown deadline for flagged illegal content from 36 hours to just three hours. Furthermore, non-consensual intimate imagery and certain deepfakes must be addressed within a two-hour window following a complaint. Key requirements include:
- User Declarations: Platforms must prompt users to declare if their content is AI-generated before it is uploaded.
- Automated Verification: Intermediaries are required to deploy automated tools to cross-verify user declarations against the nature of the content.
- Periodic Notifications: Platforms must inform users at least once every three months about the legal consequences of misusing AI.
- Slashed Grievance Timelines: The standard 15-day window for resolving general user grievances has been reduced to seven days.
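The compressed timelines above translate into hard deadlines for moderation queues. A minimal sketch of the arithmetic, using the hours reported here (the category names and the `takedown_deadline` helper are hypothetical, not terms from the rules):

```python
from datetime import datetime, timedelta, timezone

# Takedown windows after a valid complaint, per the 2026 amendments:
# three hours for flagged illegal content, two hours for non-consensual
# intimate imagery and certain deepfakes.
WINDOWS = {
    "illegal_content": timedelta(hours=3),
    "non_consensual_intimate": timedelta(hours=2),
}

def takedown_deadline(flagged_at: datetime, category: str) -> datetime:
    """Return the latest time by which the flagged item must be removed."""
    return flagged_at + WINDOWS[category]

flagged = datetime(2026, 2, 20, 10, 0, tzinfo=timezone.utc)
print(takedown_deadline(flagged, "illegal_content"))          # 13:00 UTC same day
print(takedown_deadline(flagged, "non_consensual_intimate"))  # 12:00 UTC same day
```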
Legal Accountability and Safe Harbour Protections
The 2026 amendments draw a direct line between synthetic content and existing criminal statutes, including the Bharatiya Nyaya Sanhita, the POCSO Act, and the Explosive Substances Act. Deepfakes involving child sexual abuse, impersonation for fraud, or deceptive depictions of real-world events will be treated on par with other unlawful information, making platforms liable if they fail to exercise due diligence.
Despite the stricter mandates, the government has provided a degree of protection for compliant intermediaries. The notification clarifies that acting against synthetic content—including through the use of automated detection tools—will not result in the loss of "safe harbour" protection under Section 79 of the IT Act, provided the platform adheres to the newly prescribed due diligence standards.
(The above story first appeared on LatestLY on Feb 10, 2026 08:43 PM IST. For more news and updates on politics, world, sports, entertainment and lifestyle, log on to our website latestly.com).