British Technology Firms and Child Protection Agencies to Test AI's Capability to Generate Exploitation Content

Technology companies and child protection organizations will receive permission to evaluate whether AI tools can generate child exploitation material under recently introduced UK laws.

Significant Rise in AI-Generated Harmful Content

The announcement coincided with revelations from a safety monitoring body showing that reports of AI-generated child sexual abuse material have increased dramatically in the past year, rising from 199 in 2024 to 426 in 2025.

New Legal Structure

Under the changes, the government will permit designated AI companies and child safety organizations to examine AI systems – the foundational technology for conversational AI and visual AI tools – and ensure they have adequate protective measures to stop them from creating images of child exploitation.

"Fundamentally, this is about stopping abuse before it occurs," declared Kanishka Narayan, noting: "Specialists, under strict conditions, can now detect the danger in AI systems early."

Tackling Legal Challenges

The amendments have been introduced because it is illegal to create and possess CSAM, meaning that AI creators and other parties could not generate such content as part of a testing regime. Previously, officials had to wait until AI-generated CSAM had been uploaded online before dealing with it. This legislation is designed to prevent that problem by helping to stop the production of those images at their origin.

Legal Framework

The government is adding the changes as revisions to the criminal justice legislation, which is also implementing a prohibition on owning, producing or sharing AI models developed to generate child sexual abuse material.

Real-World Consequences

This week, the official toured the London headquarters of a children's helpline and listened to a simulated call to counsellors featuring an account of AI-based exploitation.
The interaction depicted a teenager requesting help after facing extortion using a sexualised deepfake of himself, created with AI.

"When I learn about young people facing extortion online, it causes intense frustration in me and justified anger amongst parents," he stated.

Concerning Statistics

A leading online safety foundation reported that cases of AI-generated abuse material – such as online pages that may include numerous files – had significantly increased so far this year. Cases of the most severe content – the most serious form of exploitation – rose from 2,621 visual files to 3,086.

- Girls were overwhelmingly victimized, accounting for 94% of prohibited AI images in 2025
- Depictions of infants to toddlers rose from five in 2024 to 92 in 2025

Sector Response

The law change could "represent a crucial step to ensure AI tools are secure before they are launched," stated the chief executive of the online safety organization.

"Artificial intelligence systems have made it so survivors can be targeted all over again with just a few clicks, giving offenders the capability to make possibly endless quantities of advanced, photorealistic exploitative content," she added.

"Content which further exploits survivors' suffering, and renders children, particularly female children, less safe on and off line."

Support Interaction Data

The children's helpline also released details of support interactions where AI was mentioned. AI-related harms discussed in the conversations include:

- Using AI to rate weight, body and appearance
- Chatbots dissuading children from consulting trusted adults about abuse
- Facing harassment online with AI-generated material
- Online extortion using AI-faked pictures

Between April and September this year, Childline conducted 367 counselling sessions in which AI, chatbots and related terms were discussed, four times as many as in the same period last year.
Fifty percent of the mentions of AI in the 2025 sessions were connected with mental health and wellbeing, encompassing using AI assistants for support and AI therapy apps.