England’s children’s commissioner Dame Rachel de Souza has called for a complete ban on AI nudification apps, warning they pose a severe threat to child safety as deepfake abuse surges.
In a report published on Monday, Dame Rachel urged the government to prohibit applications that use artificial intelligence to create sexually explicit images of children. She criticised the government for allowing such apps to operate unchecked, leading to “extreme real-world consequences” for young people, particularly girls.
Deepfake Abuse Targeting Young Girls
The commissioner highlighted that AI nudification apps, which edit real photos to depict individuals naked, are disproportionately targeting girls and young women. Many of these apps are designed specifically to work on female bodies, creating a heightened climate of fear among young internet users.
According to the report, girls are increasingly avoiding sharing photos or engaging online to protect themselves, mirroring the precautions they take offline, such as not walking home alone at night. Children expressed fear that “a stranger, a classmate, or even a friend” could use widely accessible AI tools to target them.
Government Response and New Legislation
A government spokesperson said creating, possessing, or distributing AI-generated child sexual abuse material is illegal under existing laws, and further measures are being introduced under the Online Safety Act. In February, the government announced new offences specifically targeting the use of AI to create child sexual abuse images.
However, Dame Rachel insisted the measures do not go far enough. Her spokesperson told the BBC: “There should be no nudifying apps, not just no apps that are classed as child sexual abuse generators.”
Sharp Rise in AI Child Abuse Reports
Data from the Internet Watch Foundation (IWF) revealed a 380% surge in reports of AI-generated child sexual abuse material, with 245 reports so far in 2024 compared with 51 in 2023. IWF Interim Chief Executive Derek Ray-Hill warned that these apps are being misused in schools and that the resulting images can rapidly spiral out of control.
The Department for Science, Innovation and Technology confirmed that platforms are now legally required to remove such content or face significant fines under the Online Safety Act. The UK is the first country to introduce dedicated AI child abuse offences.
Further Action Demanded
Dame Rachel called for stronger action, including:
• Imposing legal obligations on developers of AI tools to mitigate risks to children
• Establishing a systematic process for removing deepfake sexual abuse images
• Recognising deepfake sexual abuse as a form of violence against women and girls
Paul Whiteman, general secretary of the NAHT school leaders’ union, echoed the call for urgent review, warning that AI technology risks evolving faster than the law or education systems can respond.
Meanwhile, media regulator Ofcom recently finalised its Children’s Code, which requires websites hosting harmful content to strengthen age verification measures or face heavy penalties. Dame Rachel criticised the code for prioritising the “business interests of technology companies over children’s safety”.