UK Tech Firms and Child Protection Officials to Examine AI's Capability to Generate Exploitation Images

Tech firms and child safety organizations will receive permission to evaluate whether artificial intelligence tools can generate child exploitation images under recently introduced British laws.

Significant Rise in AI-Generated Illegal Material

The announcement came as a safety watchdog revealed that reports of AI-generated child sexual abuse material have more than doubled in the past year, rising from 199 in 2024 to 426 in 2025.

New Legal Structure

Under the amendments, the government will allow designated AI companies and child protection organizations to examine AI models – the underlying technology for conversational AI and visual AI tools – and verify they have sufficient protective measures to prevent them from producing images of child sexual abuse.

"Fundamentally about stopping abuse before it happens," stated the minister for AI and online safety, adding: "Experts, under rigorous conditions, can now identify the danger in AI systems early."

Tackling Legal Obstacles

The amendments were needed because creating and possessing CSAM is illegal, meaning AI developers and other parties could not generate such images even as part of a testing process. Previously, authorities had to wait until AI-generated CSAM appeared online before they could address it.

This legislation aims to prevent that problem by enabling experts to stop the production of such images at source.

Legal Structure

The changes are being introduced as amendments to the crime and policing bill, which also includes a ban on possessing, creating or distributing AI systems designed to generate exploitative content.

Real-World Impact

Recently, the minister toured the London base of a children's helpline and listened to a mock-up call to counsellors featuring an account of AI-based exploitation. The interaction portrayed an adolescent requesting help after being blackmailed using an explicit deepfake of himself, created with AI.

"When I hear about young people facing blackmail online, it fills me with intense anger, and families are rightly concerned," he said.

Concerning Data

A leading online safety organization stated that instances of AI-generated abuse material – in the form of web pages that can each contain numerous files – had more than doubled so far this year.

Instances of category A content – the most serious form of abuse – rose from 2,621 visual files to 3,086.

  • Girls were overwhelmingly targeted, making up 94% of illegal AI images in 2025
  • Portrayals of newborns to toddlers increased from five in 2024 to 92 in 2025

Sector Response

The legislative amendment could "constitute a vital step to guarantee AI tools are safe before they are released," stated the chief executive of the internet monitoring organization.

"AI tools have made it possible for survivors to be targeted repeatedly with just a few simple actions, giving criminals the capability to create potentially endless amounts of sophisticated, photorealistic child sexual abuse material," she added. "Content which further exploits survivors' trauma, and makes children, especially girls, less safe both online and offline."

Support Interaction Information

Childline also released details of counselling sessions in which AI was mentioned. AI-related risks raised in the conversations include:

  • Using AI to evaluate weight, body and looks
  • Chatbots discouraging children from consulting safe guardians about harm
  • Facing harassment online with AI-generated material
  • Online extortion using AI-faked pictures

Between April and September this year, Childline delivered 367 support interactions in which AI, conversational AI and related topics were discussed – four times as many as in the same period last year.

Half of the mentions of AI in the 2025 interactions related to mental health and wellbeing, including the use of AI chatbots for support and AI therapy apps.

Lisa Rice

A food industry analyst with over a decade of experience, specializing in consumer trends and product reviews.