New UK Law Lets Experts Test AI Systems for Child Sexual Abuse Image Risks
- By Judy Chang
- 09 Mar 2026
Tech firms and child protection agencies will be given the power to test whether AI systems can generate child sexual abuse images under recently introduced British laws.
The announcement coincided with findings from a safety watchdog showing that reports of AI-generated CSAM have more than doubled in the past year, rising from 199 in 2024 to 426 in 2025.
Under the amendments, the government will permit designated AI companies and child protection organizations to examine AI systems – the underlying technology for conversational AI and visual AI tools – and ensure they have adequate safeguards to prevent them from producing images of child sexual abuse.
"Fundamentally about stopping exploitation before it occurs," declared the minister for AI and online safety, noting: "Experts, under strict protocols, can now identify the danger in AI systems promptly."
The amendments were needed because producing and possessing CSAM is illegal, which meant AI developers and other parties could not create such content even as part of a testing process. Previously, officials had to wait until AI-generated CSAM was uploaded online before they could act.
The law aims to avert that problem by making it possible to stop the production of such material at its source.
The government is introducing the changes as amendments to the crime and policing bill, which also brings in a prohibition on possessing, producing or distributing AI models designed to generate child sexual abuse material.
Recently, the minister toured the London headquarters of a children's helpline and listened to a mock-up of a call to counsellors involving a report of AI-based abuse. The call depicted an adolescent seeking help after being blackmailed with a sexualised deepfake of himself, created using AI.
"When I learn about children experiencing blackmail online, it is a cause of extreme frustration in me and justified concern amongst families," he stated.
A prominent online safety organization stated that cases of AI-generated exploitation content – such as webpages that may include numerous images – had significantly increased so far this year.
Cases of category A material – the most serious form of exploitation – rose from 2,621 images or videos to 3,086.
The legislative amendment could "represent a crucial step to guarantee AI tools are safe before they are launched," stated the chief executive of the internet monitoring organization.
"AI tools have enabled so victims can be victimised repeatedly with just a few clicks, giving criminals the ability to create potentially endless quantities of advanced, lifelike exploitative content," she added. "Content which further commodifies victims' trauma, and renders children, especially girls, less safe both online and offline."
Childline also released information about support sessions in which AI was mentioned, along with the AI-related harms discussed in those conversations.
Between April and September this year, Childline delivered 367 support sessions in which AI, conversational AI and associated terms were discussed, significantly more than in the equivalent period last year.
Half of the mentions of AI in the 2025 sessions related to mental health and wellbeing, including the use of chatbots for support and AI therapy applications.