UNICEF calls for criminalization of AI content depicting child sex abuse
By Jasper Ward
Wed, February 4, 2026 at 5:20 PM UTC
Feb 4 (Reuters) - The United Nations children's agency UNICEF on Wednesday called for countries to criminalize the creation of AI-generated child sexual abuse content, saying it was alarmed by reports of an increase in the number of artificial intelligence images sexualizing children.
The agency also urged developers to implement safety-by-design approaches and guardrails to prevent misuse of AI models. It said digital companies should prevent the circulation of these images by strengthening content moderation with investment in detection technologies.
"The harm from deepfake abuse is real and urgent. Children cannot wait for the law to catch up," UNICEF said in a statement. Deepfakes are AI-generated images, videos, and audio that convincingly impersonate real people.
UNICEF also raised concerns about what it called the "nudification" of children, using AI to strip or alter clothing in photos to create fabricated nude or sexualized images.
At least 1.2 million children across 11 countries disclosed having their images manipulated into sexually explicit deepfakes in the past year, according to UNICEF.
Britain said on Saturday it plans to make it illegal to use AI tools to create child sexual abuse images, making it the first country to do so.
Concerns have increased in recent years about the use of AI to generate child abuse content, particularly by chatbots such as Grok, developed by Elon Musk's xAI, which has come under scrutiny for producing sexualized images of women and minors.
A Reuters investigation found the chatbot continued to produce these images even when users explicitly warned the subjects had not consented.
xAI said on January 14 it had restricted image editing for Grok AI users and blocked users, based on their location, from generating images of people in revealing clothing in "jurisdictions where it's illegal." It did not identify the countries. It had earlier limited the use of Grok's image generation and editing features only to paying subscribers.
(Reporting by Jasper Ward in Washington; editing by Michelle Nichols and Rod Nickel)