British Technology Companies and Child Protection Officials to Test AI's Ability to Create Abuse Images
Tech firms and child protection agencies will be granted permission under newly introduced UK legislation to test whether artificial intelligence systems can generate child abuse material.
Significant Rise in AI-Generated Harmful Content
The announcement coincided with findings from a safety watchdog showing that reports of AI-generated child sexual abuse material have risen dramatically in the last twelve months, from 199 in 2024 to 426 in 2025.
New Regulatory Framework
Under the changes, approved AI developers and child safety organisations will be permitted to examine AI models – the foundational technology behind conversational AI and image-generation tools – and check that they have sufficient safeguards to prevent them from creating images of child exploitation.
"Fundamentally about stopping exploitation before it happens," stated the minister for AI and online safety, noting: "Experts, under strict protocols, can now identify the danger in AI systems early."
Addressing Legal Obstacles
The amendments have been introduced because it is illegal to create and possess child sexual abuse material (CSAM), meaning AI developers and others cannot generate such content as part of an evaluation process. Until now, officials have had to wait until AI-generated CSAM was uploaded online before they could act.
This legislation is designed to avert that issue by helping to halt the production of such material at source.
Legislative Structure
The changes are being introduced as amendments to the criminal justice legislation, which also implements a ban on possessing, producing or distributing AI models designed to create child sexual abuse material.
Real-World Consequences
Recently, the official toured the London base of a children's helpline and listened to a mock-up call to counsellors featuring an account of AI-based abuse. The scenario portrayed an adolescent seeking help after being extorted with an explicit deepfake of himself created using AI.
"When I learn about children experiencing blackmail online, it is a cause of intense anger in me and justified anger amongst families," he said.
Concerning Data
A leading online safety foundation reported that instances of AI-generated abuse content – each instance being a webpage that can contain numerous images – have risen significantly so far this year.
Cases involving the most severe category of material – the most serious form of abuse – rose from 2,621 images and videos to 3,086.
- Girls were overwhelmingly targeted, accounting for 94% of illegal AI images in 2025
- Depictions of infants and toddlers rose from five in 2024 to 92 in 2025
Sector Response
The legislative amendment could "represent a vital step to guarantee AI tools are secure before they are released," commented the chief executive of the online safety foundation.
"Artificial intelligence systems have made it so survivors can be victimised all over again with just a simple actions, giving criminals the ability to create possibly endless quantities of advanced, lifelike exploitative content," she continued. "Material which additionally exploits survivors' trauma, and makes children, particularly female children, less safe on and off line."
Counselling Interaction Data
The children's helpline also released details of counselling sessions where AI has been referenced. AI-related risks mentioned in the conversations include:
- Employing AI to rate weight, physique and appearance
- Chatbots dissuading young people from talking to safe adults about abuse
- Being bullied online with AI-generated material
- Digital extortion using AI-faked images
Between April and September this year, the helpline conducted 367 counselling interactions where AI, conversational AI and associated terms were discussed, four times as many as in the same period last year.
Half of the AI-related mentions in the 2025 interactions concerned mental health and wellbeing, including the use of AI chatbots for support and AI therapy apps.