World News | AI-generated Sexualised Images Depicting Children Constitute Child Sexual Abuse, Must Be Criminalised: UNICEF
New York [US], February 6 (ANI): Artificial Intelligence (AI)-generated sexualised images depicting children constitute child sexual abuse and must be criminalised, UNICEF said, warning of a rapid and alarming rise in the misuse of AI tools to create abusive content.
In a statement, the UN agency responsible for providing humanitarian and developmental aid to children worldwide said it was increasingly concerned by reports showing a surge in AI-generated sexualised images, including cases where real photographs of children are manipulated and sexualised through deepfake technology.
"Sexualised images of children generated or manipulated using AI tools are child sexual abuse material. Deepfake abuse is abuse, and there is nothing fake about the harm it causes," UNICEF said.
UNICEF said deepfakes (images, videos or audio generated or altered using AI to appear real) are being used to produce sexualised content involving children, including through so-called "nudification", where AI tools digitally remove or alter clothing to fabricate nude or sexual images.
Citing new evidence, UNICEF said a joint study conducted with ECPAT and INTERPOL across 11 countries found that at least 1.2 million children disclosed that their images had been manipulated into sexually explicit deepfakes over the past year. In some countries, this amounted to one in 25 children, roughly one child in a typical classroom.
The UN agency said children themselves are acutely aware of the threat. In some countries surveyed, up to two-thirds of children reported worrying that AI could be used to create fake sexual images or videos of them, highlighting the urgent need for stronger awareness, prevention and protection measures.
UNICEF stressed that even when an identifiable victim is not immediately apparent, AI-generated child sexual abuse material normalises the sexual exploitation of children, fuels demand for abusive content and creates major challenges for law enforcement in identifying and protecting victims.
"When a child's image or identity is used, that child is directly victimised," the organisation said.
While welcoming steps taken by some AI developers to adopt safety-by-design approaches and implement safeguards to prevent misuse, UNICEF said protections across the industry remain uneven. It warned that risks are amplified when generative AI tools are embedded into social media platforms, enabling manipulated images to spread rapidly.
UNICEF called on governments worldwide to expand legal definitions of child sexual abuse material to explicitly include AI-generated content and to criminalise its creation, possession, procurement and distribution.
The agency also urged AI developers to implement robust safeguards, and digital platforms to prevent the circulation of such material rather than removing it only after abuse has occurred. It said stronger content moderation and investment in detection technologies were essential to ensure the immediate removal of abusive material.
"The harm from deepfake abuse is real and urgent," UNICEF said. "Children cannot wait for the law to catch up." (ANI)
(The above story is verified and authored by ANI staff. ANI is South Asia's leading multimedia news agency, with over 100 bureaus in India, South Asia and across the globe. ANI brings the latest news on politics and current affairs in India and around the world, as well as sports, health, fitness, entertainment and business. The views appearing in the above post do not reflect the opinions of LatestLY.)