UNICEF Warns of Alarming Rise in AI-Generated Sexualized Deepfakes Targeting Children

Written by Jeremy Werner

Jeremy is an experienced journalist, skilled communicator, and constant learner with a passion for storytelling and a track record of crafting compelling narratives. He has a diverse background in broadcast journalism, AI, public relations, data science, and social media management.
Posted on 02/19/2026
In News

UNICEF has issued an urgent warning about the rapid global increase in AI-generated sexualized images of children, describing the phenomenon as a growing form of child sexual abuse that demands immediate action from governments, technology companies, and AI developers.


The organization said artificial intelligence tools are increasingly being used to manipulate photos of children into sexually explicit deepfakes, including so-called “nudification,” where AI alters images to fabricate nude or sexualized content. UNICEF emphasized that such material constitutes child sexual abuse material (CSAM), regardless of whether the images depict real or artificially generated scenarios.


New research conducted jointly by UNICEF, ECPAT, and INTERPOL across 11 countries highlights the scale of the threat. At least 1.2 million children reported having their images manipulated into sexually explicit deepfakes in the past year. In some countries, this equates to approximately one in every 25 children, or the equivalent of one child in a typical classroom.


The findings also show children are increasingly aware of the risks. In several surveyed countries, up to two-thirds of children said they worry that AI could be used to create fake sexual images or videos of them. UNICEF said the growing prevalence of these tools, especially when integrated into social media platforms, increases the speed and scale at which abusive content can spread.


UNICEF warned that even when AI-generated content does not depict an identifiable victim, it can normalize child exploitation, fuel demand for abusive material, and complicate law enforcement efforts to identify and protect victims.


The organization called on governments to expand legal definitions of child sexual abuse material to explicitly include AI-generated content and to criminalize its creation, possession, and distribution. UNICEF also urged AI developers to implement safety-by-design safeguards and for digital platforms to proactively detect and prevent such material from circulating.


“The harm from deepfake abuse is real and urgent,” UNICEF said. “Children cannot wait for the law to catch up.”


Need Help?


If you have questions or concerns about any global guidelines, regulations, and laws, don't hesitate to reach out to BABL AI. Their Audit Experts can offer valuable insight and ensure you're informed and compliant.

