The UK government has introduced sweeping new legislation to prevent artificial intelligence (AI) systems from being misused to create synthetic child sexual abuse images, after new data showed reports of AI-generated material more than doubled over the past year.
According to figures released Wednesday by the Internet Watch Foundation (IWF), reports of AI-generated child sexual abuse material rose from 199 in 2024 to 426 in 2025. The organization also warned of a sharp increase in images depicting infants aged 0–2, which surged from just five cases last year to 92 this year. The severity of the material has intensified as well, with Category A content — the most serious level — rising to 56% of all confirmed illegal AI imagery.
The new laws, announced by the Department for Science, Innovation and Technology, will allow the government to designate AI developers and child protection groups such as the IWF as authorised testers. The move will give these bodies legal protections to examine AI systems for vulnerabilities and ensure safeguards are in place before models are released.
Currently, the creation or possession of child sexual abuse material — including AI-generated content — is illegal, making it difficult for developers to test models without risking criminal liability. Officials say the legislation is one of the first of its kind worldwide and aims to “build safety in at the source” rather than removing harmful content after it appears online.
Technology Secretary Liz Kendall said the government “will not allow technological advancement to outpace our ability to keep children safe,” while Safeguarding Minister Jess Phillips said the measure would prevent legitimate AI tools from being manipulated to produce “vile material.”
The amendment will be added to the Crime and Policing Bill and includes provisions to test models for other risks, including extreme pornography and non-consensual intimate images. The government will also convene a group of AI and child-safety experts to oversee secure testing practices.
IWF Chief Executive Kerry Smith welcomed the legislation, calling it a “vital step” toward ensuring AI products are safe before they reach the public.