Research published by Anglia Ruskin University in the UK has revealed a growing demand for AI-generated CSAM on dark web forums.
Researchers Dr. Deanna Davy and Prof. Sam Lundrigan analyzed conversations from these forums over the past year, discovering a troubling pattern of users actively learning and sharing techniques to create such material using AI tools.
“We found that many of the offenders are sourcing images of children in order to manipulate them, and that the desire for ‘hardcore’ imagery escalating from ‘softcore’ is regularly discussed,” Dr. Davy explains in a blog post.
This dispels the misconception that AI-generated images are “victimless,” as real children’s images are often used as source material for these AI manipulations.
The study also found that forum members referred to those creating AI-generated CSAM as “artists,” with some expressing hope that the technology would evolve to make the process easier.
This mindset highlights the normalization of such criminal behavior within these online communities.
Professor Lundrigan added, “The conversations we analysed show that through the proliferation of advice and guidance on how to use AI in this way, this type of child abuse material is escalating and offending is increasing. This adds to the growing global threat of online child abuse in all forms, and must be viewed as a critical area to address in our response to this type of crime.”
Man arrested for illicit AI image production
In a related case reported by the BBC on the same day, Greater Manchester Police (GMP) recently announced what they describe as a “landmark case” involving the use of AI to create indecent images of children.
Hugh Nelson, a 27-year-old man from Bolton, admitted to 11 offenses, including the distribution and making of indecent images, and is due to be sentenced on September 25th.
Detective Constable Carly Baines from GMP described the case as “particularly unique and deeply horrifying,” noting that Nelson had transformed “normal everyday photographs” of real children into indecent imagery using AI technology. “This case is a first in our area and is a landmark case nationally,” she added.
The case against Nelson has highlighted the challenges law enforcement faces in dealing with this new form of digital crime.
GMP described it as a “real test of legislation,” as the use of AI in this manner is not specifically addressed in current UK law. DC Baines expressed hope that this case would “play a role in influencing what future legislation looks like.”
These developments come in the wake of several other high-profile cases involving AI-generated CSAM.
In April, a Florida man was charged with allegedly using AI to generate explicit images of a child neighbor. Last year, a North Carolina child psychiatrist was sentenced to 40 years in prison for creating AI-generated abusive material from images of his child patients.
More recently, the US Department of Justice announced the arrest of 42-year-old Steven Anderegg in Wisconsin for allegedly creating more than 13,000 AI-generated abusive images of children.
Adding to the issue’s complexity, a Stanford University report revealed that hundreds of real CSAM images were included in the LAION-5B database used to train popular AI tools.
Experts say that once the dataset was made publicly available, the creation of AI-generated CSAM surged.
This also raised serious questions about AI developers’ responsibility in vetting their training data.
The rise of AI-generated CSAM also poses new challenges for content moderation on social media platforms and other online spaces.
Traditional detection methods may struggle to identify AI-generated imagery, potentially allowing such content to proliferate more easily.
Experts are calling for a multi-faceted approach to address this growing threat. This includes:
Updating legislation to specifically address AI-generated CSAM.
Enhancing collaboration between tech companies, law enforcement, and child protection organizations.
Developing more sophisticated AI detection tools to identify and remove AI-generated CSAM.
Increasing public awareness about the harm caused by all forms of CSAM, including AI-generated content.
Providing better support and resources for victims of child sexual abuse, including those affected by the AI-manipulation of their images.
Implementing stricter vetting processes for AI training datasets to prevent the inclusion of CSAM.
So far, these measures have yet to prove effective. Meaningful progress will require addressing two problems at once: the way abusive AI-generated images evade technical detection while occupying a grey area in legislation, and the ease with which real photographs can be manipulated into such material.