AI-generated child abuse webpages surge 400%, alarming watchdog

(Bloomberg) — Reports of child sexual abuse imagery created using artificial intelligence tools surged 400% in the first half of 2025, according to new data from the UK-based nonprofit Internet Watch Foundation.
The organization, which monitors child sexual abuse material online, recorded 210 webpages containing AI-generated material in the first six months of 2025, up from 42 in the same period the year before, according to a report published this week. On those pages were 1,286 videos, up from just two in 2024. The majority of this content was so realistic it had to be treated under UK law as if it were actual footage, the IWF said.
Roughly 78% of the videos—1,006 in total—were classified as “Category A,” the most severe level, which can include depictions of rape, sexual torture, and bestiality, the IWF said. Most of the videos involved girls and in some cases used the likenesses of real children.
The growing prevalence of AI-generated child abuse material has alarmed law enforcement worldwide. As generative AI tools become more accessible and sophisticated, the quality of the pictures and videos is improving, making them harder than ever to detect using traditional techniques. While early videos were short and glitchy, the IWF now sees longer, more realistic productions featuring complex scenes and varied settings. Authorities say the content is often used for harassment and extortion.

“Just as we saw with still images, AI videos of child sexual abuse have now reached the point they can be indistinguishable from genuine films,” said Derek Ray-Hill, interim chief executive of the IWF. “The children being depicted are often real and recognizable, the harm this material does is real, and the threat it poses threatens to escalate even further.”

Law enforcement agencies are starting to take action. In a coordinated operation earlier this year, Europol arrested 25 individuals in connection with distributing such material. More than 250 suspects were identified across 19 countries, Bloomberg reported.

The IWF called for the UK to develop a regulatory framework to ensure AI models have controls that block the production of this type of material.
In February, the UK became the first country to criminalize the creation and distribution of AI tools intended to generate child abuse content. The law bans possession of AI models optimized to produce such material, as well as manuals that instruct offenders on how to do so.
In the US, the National Center for Missing & Exploited Children—an IWF counterpart—said it received over 7,000 reports related to AI-generated child sexual abuse content in 2024.
While most commercial AI tools include safeguards against generating abusive content, some open-source or custom models lack these protections, leaving them open to misuse.