Delhi, February 10: The Government of India has issued a new set of mandates requiring social media platforms to prominently label all AI-generated or "synthetic" content. Under the latest directives, any content created or altered using artificial intelligence must carry permanent embedded identifiers and metadata.

This regulatory tightening places the onus directly on intermediaries and social media giants to ensure that AI labels remain visible. The government has explicitly prohibited the removal or suppression of these labels once they have been applied. By enforcing these "traceability" features, authorities aim to help users distinguish between authentic and computer-generated information in real time.

New AI Rules: Automated Detection and Moderation

The government order further requires platforms to deploy sophisticated automated tools to monitor their ecosystems. These tools must be designed to detect and prevent the circulation of AI content that is deemed illegal, sexually exploitative, or intentionally deceptive.

This proactive approach shifts the responsibility from reactive moderation to preventive technology. Platforms are now expected to identify high-risk synthetic media before it reaches a mass audience, particularly in cases involving non-consensual deepfakes or content that could incite public disorder.
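
The order does not spell out how this pre-publication screening should work. As a purely illustrative sketch, and not the mandated mechanism, the Python snippet below assumes a hypothetical "ai_generated" flag embedded in a PNG's text chunks and routes flagged uploads for review before they are published:

```python
from PIL import Image

def screen_upload(path: str) -> str:
    """Illustrative pre-publication check, not the mandated mechanism.

    Assumes a hypothetical "ai_generated" key in a PNG's text chunks;
    non-PNG formats simply fall through with no metadata found.
    """
    img = Image.open(path)
    text_chunks = getattr(img, "text", {}) or {}  # PNG text chunks, if any
    if text_chunks.get("ai_generated") == "true":
        return "label-and-review"  # high-risk synthetic media: hold for checks
    return "publish"               # no AI provenance flag found
```

In practice, platforms would pair a check like this with classifier-based detection, since uploads with stripped or missing metadata carry no flag to read.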

Persistence of AI Metadata

A key highlight of the new rules is the "persistence" of identifiers. Platforms are now legally barred from allowing users to strip away metadata or watermarks that indicate a file was AI-generated. This ensures that even if a piece of content is shared across multiple platforms, its status as synthetic remains attached to the file.

Experts suggest this addresses a major loophole where users would download an AI-labeled video and re-upload it elsewhere to bypass detection. Under the new mandate, the digital signature must stay embedded throughout the content’s lifecycle.
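
Neither the rules nor this report specifies a technical format for these identifiers; industry schemes such as C2PA's Content Credentials attach cryptographically signed provenance manifests. The illustrative Python sketch below, using hypothetical "ai_generated" and "generator" keys in PNG text chunks, shows both how a simple label can travel with a file and how easily a naive label is lost on re-encoding, which is exactly the loophole the persistence mandate targets:

```python
from PIL import Image
from PIL.PngImagePlugin import PngInfo

# Illustrative only: "ai_generated" and "generator" are hypothetical keys,
# not a format the directives mandate.
img = Image.new("RGB", (64, 64), "white")  # stand-in for generated content

meta = PngInfo()
meta.add_text("ai_generated", "true")
meta.add_text("generator", "example-model-v1")
img.save("labeled.png", pnginfo=meta)

# The label survives a faithful re-share of the file...
print(Image.open("labeled.png").text)  # {'ai_generated': 'true', ...}

# ...but a plain re-encode discards it, which is why durable watermarking
# or signed metadata is needed for persistence across platforms.
Image.open("labeled.png").convert("RGB").save("stripped.png")
print(getattr(Image.open("stripped.png"), "text", {}))  # {}
```

A plain metadata field illustrates the concept, but only watermark-based or cryptographically signed schemes can realistically survive the cross-platform re-sharing the rules describe.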

Quarterly User Awareness and Penalties

To ensure users are aware of the legal consequences of misusing AI, the government has mandated a regular warning system. Social media companies must now notify their users at least once every three months about the penalties associated with violating digital rules.

These warnings must specifically highlight the risks of using AI for harmful purposes, such as impersonation or fraud. The government intends for these recurring notifications to serve as a deterrent against the malicious creation of deepfakes.

These directives follow a series of consultations between the Ministry of Electronics and Information Technology (MeitY) and various tech stakeholders. The surge in highly realistic AI-generated images and videos during recent global events has accelerated the need for a formal legal framework.

While the government encourages AI innovation, it has maintained a firm stance that safety and trust must be the priority. Platforms that fail to comply with these new labeling and warning mandates may face strict penalties under the Information Technology Act.

