
https://www.unicef.org/press-releases/deepfake-abuse-is-abuse
“Deepfakes – images, videos, or audio generated or manipulated with Artificial Intelligence (AI) designed to look real – are increasingly being used to produce sexualised content involving children, including through “nudification,” where AI tools are used to strip or alter clothing in photos to create fabricated nude or sexualised images.
“New evidence confirms the scale of this fast-growing threat: In a UNICEF, ECPAT and INTERPOL study* across 11 countries, at least 1.2 million children disclosed having had their images manipulated into sexually explicit deepfakes in the past year. In some countries, this represents 1 in 25 children – the equivalent of one child in a typical classroom.
“Children themselves are deeply aware of this risk. In some of the study countries, up to two thirds of children said they worry that AI could be used to create fake sexual images or videos. Levels of concern vary widely between countries, underscoring the urgent need for stronger awareness, prevention, and protection measures.
“We must be clear. Sexualised images of children generated or manipulated using AI tools are child sexual abuse material (CSAM). Deepfake abuse is abuse, and there is nothing fake about the harm it causes.
“When a child's image or identity is used, that child is directly victimised. Even without an identifiable victim, AI-generated child sexual abuse material normalises the sexual exploitation of children, fuels demand for abusive content, and presents significant challenges for law enforcement in identifying and protecting children who need help.
“UNICEF strongly welcomes the efforts of those AI developers that are implementing safety-by-design approaches and robust guardrails to prevent misuse of their systems. However, the landscape remains uneven, and too many AI models are not being developed with adequate safeguards. The risks can be compounded when generative AI tools are embedded directly into social media platforms where manipulated images spread rapidly.
“UNICEF urgently calls for the following actions to confront the escalating threat of AI-generated child sexual abuse material:
- All governments expand definitions of child sexual abuse material (CSAM) to include AI-generated content, and criminalise its creation, procurement, possession and distribution.
- AI developers implement safety-by-design approaches and robust guardrails to prevent misuse of AI models.
- Digital companies prevent the circulation of AI-generated child sexual abuse material rather than merely removing it after the abuse has occurred, and strengthen content moderation by investing in detection technologies so that such material can be removed immediately – not days after a report by a victim or their representative.
“The harm from deepfake abuse is real and urgent. Children cannot wait for the law to catch up.”
#####
Notes for editors:
This statement reflects positions outlined in UNICEF's Guidance on AI and Children 3.0 (December 2025).
* This new data forms part of Disrupting Harm Phase 2, the second phase of a research project led by UNICEF’s Office of Strategy and Evidence – Innocenti, ECPAT International and INTERPOL, with funding from Safe Online. The project examines how digital technologies facilitate child sexual exploitation and abuse, and generates evidence to help strengthen national systems, policies and responses.
As part of this phase, national reports with country‑level findings will be released throughout 2026. The estimates presented here are based on nationally representative household surveys implemented by UNICEF and IPSOS across 11 countries. Each survey included one child aged 12–17 and one parent or caregiver, using a sampling design aimed at achieving full or near‑full national coverage (91–100%). The research was carried out across countries representing diverse regional contexts. Further methodological detail is available at: https://safeonline.global/dh2-research-methods_final-2/
