The number of AI-generated child sexual abuse videos online has skyrocketed, with paedophiles capitalising on advancements in the technology.
The Internet Watch Foundation has stated that AI-generated videos of abuse have become nearly indistinguishable from genuine imagery, with a significant rise in their prevalence online in 2025.
In the first six months of 2025, the UK-based internet safety organisation verified 1,286 AI-generated videos of child sexual abuse that broke the law, compared with just two in the same period the previous year.
The IWF reported that over 1,000 of the videos contained category A abuse, the designation for the most serious form of material.
The organisation attributed the rise to heavy investment in AI, which has made powerful video-generation models widely accessible to paedophiles.
As one IWF analyst remarked, “It is a very competitive industry. There is a lot of money involved, so unfortunately, there is a wide range of options for perpetrators.”
The videos were identified amid a 400% surge in URLs featuring AI-made child sexual abuse in the first six months of 2025. The IWF received reports of 210 such URLs this year, up from 42 the previous year, with each webpage hosting hundreds of images, including a growing volume of video content.
The IWF observed a post on a dark web forum where a paedophile expressed admiration for the rapid advancements in AI technologies, noting that they had mastered one AI tool only to be surpassed by something “new and better”.
According to IWF analysts, the images were created by taking a basic, publicly available AI model and “fine-tuning” it with CSAM (Child Sexual Abuse Material) to produce lifelike videos. In some instances, these models were fine-tuned using just a few CSAM videos, the IWF stated.
The most convincing AI abuse videos seen this year were based on real-life victims, the watchdog revealed.
Derek Ray-Hill, the IWF’s interim chief executive, indicated that the increasing sophistication of AI models and their widespread availability could lead to a substantial increase in AI-generated CSAM online.
“There is an incredible risk of AI-generated CSAM leading to an absolute explosion that overwhelms the clear web,” he said, noting that an uptick in such content could fuel criminal activities such as child trafficking, child sexual abuse, and modern slavery.
Furthermore, the use of existing sexual abuse victims in AI-generated imagery allows paedophiles to significantly increase the volume of CSAM online without needing to exploit new victims, he explained.
The UK government aims to combat AI-generated CSAM by making it unlawful to possess, create, or distribute AI tools designed for abuse. Individuals found in breach of the new law could face a jail term of up to five years.
Ministers are also moving to outlaw manuals that instruct potential offenders on using AI tools to generate abusive imagery or to facilitate child abuse. Violators may face imprisonment for up to three years.
In February, while announcing these measures, Home Secretary Yvette Cooper emphasized the importance of addressing both online and offline child sexual abuse.
AI-generated CSAM is already outlawed under the Protection of Children Act 1978, which criminalises the production, distribution, and possession of an “indecent photograph or pseudo-photograph” of a child.