Government Makes It Mandatory to Label AI-Generated Content to Counter Deepfakes



New Delhi [India], February 10 (ANI): The Union Government has notified amendments to the Information Technology (Intermediary Guidelines and Digital Media Ethics Code) Rules, 2021, which make it mandatory to label AI-generated content.

Intermediaries offering tools that enable the creation or dissemination of "synthetic content" must ensure such material carries a clear and prominent label. Where technically feasible, platforms are also required to embed permanent metadata or provenance identifiers to trace the origin of such content, the Ministry of Electronics and Information Technology (MeitY) said in a notification.


The amendments also introduce formal definitions for "audio, visual or audio-visual information" and "synthetically generated information," covering content that is artificially created or altered using computer resources in a manner that appears realistic or indistinguishable from real persons or events.

Routine editing, accessibility improvements, and good-faith formatting, however, have been excluded from this definition.


"Synthetically generated information means audio, visual or audio-visual information which is artificially or algorithmically created, generated, modified or altered using a computer resource, in a manner that such information appears to be real, authentic or true and depicts or portrays any individual or event in a manner that is, or is likely to be perceived as indistinguishable from a natural person or real-world event," it said.

The amendments aim to address the growing risks posed by deepfakes and AI-driven misinformation, while balancing innovation with user safety and accountability. Non-compliance may attract penalties under the Information Technology Act, 2000, and other applicable criminal laws, it said.

The rules place enhanced due-diligence obligations on intermediaries, particularly prominent social media platforms. These include deploying automated tools to prevent the generation or circulation of unlawful synthetic content such as child sexual abuse material, misleading impersonations, or false electronic records.

"...includes any such synthetically generated information that "contains child sexual exploitative and abuse material, non-consensual intimate imagery content, or is obscene, pornographic, paedophilic, invasive of another person's privacy, including bodily privacy, vulgar, indecent or sexually explicit," the notification said.

Platforms must also require users to declare whether uploaded content is synthetically generated and verify such declarations, it said.

Timelines for compliance have been sharply reduced. Intermediaries must now act within three hours of receiving lawful takedown orders in certain cases, while grievance redressal and response timelines have also been shortened.

The new rules, issued by the Ministry of Electronics and Information Technology, will come into force on February 20, 2026. (ANI)

