UK Tech Companies and Child Safety Officials to Examine AI's Capability to Create Exploitation Images

Tech firms and child protection agencies will receive authority to assess whether AI systems can generate child abuse images under recently introduced British laws.

Substantial Rise in AI-Generated Illegal Content

The declaration coincided with findings from a safety watchdog showing that reports of AI-generated child sexual abuse material (CSAM) have more than doubled in the past year, rising from 199 in 2024 to 426 in 2025.

New Legal Framework

Under the amendments, the government will permit designated AI companies and child protection organizations to examine AI systems – the underlying technology for conversational AI and visual AI tools – and ensure they have adequate safeguards to prevent them from producing images of child sexual abuse.

"This is fundamentally about stopping exploitation before it occurs," said the minister for AI and online safety, adding: "Under strict protocols, experts can now identify the danger in AI systems promptly."

Addressing Regulatory Obstacles

The amendments were introduced because it is against the law to produce or possess CSAM, meaning that AI developers and other parties could not attempt to generate such content even as part of a testing process. Previously, officials had to wait until AI-generated CSAM was uploaded online before they could act.

The law aims to avert that problem by making it possible to stop the production of such material at its source.

Legal Framework

The changes are being added by the government as revisions to the crime and policing bill, which is also implementing a prohibition on possessing, producing or distributing AI models developed to generate child sexual abuse material.

Real-World Consequences

Recently, the official toured the London headquarters of a children's helpline and listened to a mock-up call to counsellors involving a report of AI-based abuse. The call depicted an adolescent seeking help after being blackmailed with a sexualised deepfake of himself, created using AI.

"When I learn about children experiencing blackmail online, it causes extreme frustration in me and justified concern among families," he stated.

Concerning Data

A prominent online safety organization stated that cases of AI-generated exploitation content – such as webpages that may include numerous images – had significantly increased so far this year.

Cases of category A material – the most serious form of exploitation – rose from 2,621 images or videos to 3,086.

  • Female children were predominantly targeted, accounting for 94% of prohibited AI depictions in 2025
  • Portrayals of newborns to two-year-olds rose from five in 2024 to 92 in 2025

Industry Response

The legislative amendment could "represent a crucial step to guarantee AI tools are safe before they are launched," stated the chief executive of the internet monitoring organization.

"AI tools have made it so victims can be victimised repeatedly with just a few clicks, giving criminals the ability to create potentially endless quantities of advanced, lifelike exploitative content," she added. "Content which further commodifies victims' trauma, and renders children, especially girls, less safe both online and offline."

Support Session Information

Childline also released details of support sessions in which AI was mentioned. AI-related harms discussed in the conversations include:

  • Using AI to assess body size and appearance
  • Chatbots dissuading young people from talking to trusted guardians about harm
  • Being bullied online with AI-generated content
  • Online blackmail using AI-faked pictures

Between April and September this year, Childline delivered 367 support sessions in which AI, conversational AI and associated terms were discussed – significantly more than in the equivalent period last year.

Half of the references to AI in the 2025 sessions related to mental health and wellbeing, including the use of chatbots for support and AI therapy applications.

Judy Chang

A passionate gamer and strategy enthusiast with years of experience in competitive gaming and content creation.