The Internet Watch Foundation (IWF), a UK-based internet watchdog, is once again sounding the alarm over the rapid spread of AI-generated child sexual abuse material (CSAM). In a new report released Wednesday, the group said it found 20,254 AI-generated CSAM images on a single dark web forum in just one month, warning that such a deluge of abusive content threatens to “overwhelm” the internet.
As generative AI image tools become more advanced, the ability to create lifelike images at scale has grown exponentially. AI image generators such as Midjourney, Runway, Stable Diffusion, and OpenAI's DALL-E are just a few of the platforms capable of producing convincingly realistic images.
These cloud-based platforms are widely accessible to the public, and most have implemented strict restrictions, rules, and controls to prevent malicious actors from using their tools to create abusive content. But AI enthusiasts regularly hunt for ways around these guardrails.
Susie Hargreaves, CEO of the foundation, said in the report, “It is important to communicate the facts of AI CSAM to a wider audience because we need to discuss the dark side of this amazing technology.”
Its “worst nightmares” have come true, says the IWF, which is now tracking AI-generated CSAM depicting real victims of sexual abuse. The UK group also highlighted images of celebrities that had been de-aged and manipulated to portray them as abuse victims, as well as manipulated pictures of celebrities' children.
“As if it were not enough for victims to know their abuse may be being shared in some dark corner of the internet, they are now at risk of being confronted with new images of themselves being abused in new and horrifying ways not previously imagined,” Hargreaves said.
One major problem with the proliferation of AI-generated CSAM is that it could divert law enforcement resources away from detecting and preventing actual abuse, the IWF said.
Founded in 1996, the foundation is a non-profit organization dedicated to monitoring the internet for sexually abusive content, particularly content targeting children.
In September, the IWF warned that pedophile rings were discussing and trading tips on creating illegal images of children using open-source AI models that can be downloaded and run locally on personal computers.
“Criminals can legally download everything they need to generate these images, and then produce as many images as they want offline, without any possibility of detection,” the IWF said.
The UK group called for international cooperation to combat the scourge of CSAM, proposing a multi-tiered approach that includes changes to relevant laws, updated law enforcement training, and regulatory oversight for AI models.
For AI developers, the IWF recommends prohibiting the use of their AI tools to create child abuse content, removing related models, and prioritizing the elimination of child abuse material from their models' training data.
“This is a global issue that requires countries to work together and ensure that legislation is fit for purpose,” Hargreaves said in a previous statement to Decrypt, noting that the IWF has been effective in limiting CSAM in its home country.
“The fact that less than 1% of criminal content is produced in the UK demonstrates our excellent working partnership with UK police forces and agencies, and we are actively engaging with law enforcement on this alarming new trend,” Hargreaves said. “We urge the UK Prime Minister to put this firmly on the agenda at the AI Safety Summit being hosted in the UK in November.”
The IWF says takedowns of dark web platforms hosting illegal CSAM do happen in the UK, but removal can be far more complicated when a site is hosted in another country.
There are several concerted efforts underway to combat the misuse of AI. In September, Microsoft President Brad Smith proposed applying Know Your Customer (KYC) policies, like those employed by financial institutions, to identify criminals who use AI models to spread misinformation and abuse.
The state of Louisiana passed a law in July that increased penalties for the sale and possession of AI-generated child pornography: anyone convicted of creating, distributing, or possessing such illegal images of minors faces five to 20 years in prison, a fine of up to $10,000, or both.
In August, the US Department of Justice updated its Citizen's Guide to US federal laws on child pornography. In case there was any confusion, the DOJ emphasized that child pornography is not protected by the First Amendment and is illegal under federal law.