Artificial intelligence technology is being used by predators to generate thousands of images depicting children, some under the age of two, being subjected to “the worst kinds of sexual abuse”.
The faces and bodies of real children are being built into AI training datasets in order to reproduce new pictures of minors, and the technology can even “nudify” kids whose harmless clothed images are shared on social media by their loved ones.
On a single dark web forum in just one month, the Internet Watch Foundation in the United Kingdom found more than 11,000 child abuse images, and one in five depicted primary school-aged kids.
More than 140 showed children between the ages of three and six, while two depicted babies.
The flood of AI-generated child abuse material has law enforcement agencies alarmed.
The rapid rise of AI over the past year, since the first ChatGPT model was released publicly, has alarmed law enforcement authorities across the world, and the IWF warns its nefarious use threatens to “overwhelm the internet”.
And even though the kids depicted aren’t technically real children, the material is far from harmless.
An alarming report from the IWF shows most AI-generated child abuse material is “now realistic enough to be treated as real imagery” under the law.
Susie Hargreaves, Chief Executive of the IWF, said the highly convincing material is her “worst nightmare come true”.
“Earlier this year, we warned AI imagery could soon become indistinguishable from real pictures of children suffering sexual abuse, and that we could start to see this imagery proliferating in much greater numbers,” Ms Hargreaves said.
“Chillingly, we are seeing criminals deliberately training their AI on real victims’ images who have already suffered abuse. Children who have been raped in the past are now being incorporated into new scenarios because someone, somewhere, wants to see it.”
A growing problem
While AI-generated child abuse material still represents a small fraction of the sickening content processed by the IWF, the group’s Chief Technology Officer Dan Sexton said it’s on the rise.
There are many risks posed by the booming trend, including the potential to gradually “increase normalisation of child sexual abuse”.
“It could have an effect on pathways to abuse, as those that view content could go on to create content, to contact and abuse children remotely or in person,” Mr Sexton said.
The trend is far from harmless and must be stamped out, watch groups say.
Child abuse material is prolific on the dark web, but it also exists out in the open elsewhere on the internet, he said. Between May and June this year, the IWF confirmed seven URLs containing AI-produced material on the open web.
“There is great concern over the effect on the viewer and the consequences of increasing quality and availability of AI-generated child sexual abuse content,” Mr Sexton said.
“I would expect the effect of photorealistic imagery, indistinguishable from real children, on those that view the content intentionally or accidentally to be the same as viewing real content.
“The same damaging effects of exposing people on the internet – including potentially children – to shocking and traumatic content remain.”
Mr Sexton said there is also “a massive potential” for large numbers of ‘deepfake’ images of existing victims and of children who have never been victims.
“We have already seen examples of children creating sexual images of their peers, and children contacting IWF reporting that child sexual abuse material has been created of them, showing abuse that never happened.”
AI-generated child abuse material detected by the IWF depicts kids as young as two.
Tech giants TikTok and Snapchat this week signed a pledge to tackle the rise of AI-generated child abuse material, as part of an initiative by the United States and United Kingdom governments.
“Child sexual abuse images generated by AI are an online scourge,” Britain’s Home Secretary Suella Braverman said. “The pace at which these images have spread online is shocking.
“This is why tech giants must work alongside law enforcement to clamp down on their spread. The pictures are computer-generated, but they often show real people; it’s depraved and damages lives.”
Fresh challenge for law enforcement
Another risk is that a flood of AI-generated child abuse material could quickly overwhelm the law enforcement agencies tasked with finding and removing such imagery.
“It is harder to distinguish between real child sexual abuse material and AI-generated, which risks law enforcement chasing after generated victims or not finding and safeguarding real children,” Mr Sexton said.
Queensland Police’s Taskforce Argos – the nation’s leading defence against child abuse – is aware of the boom in AI-generated material flooding the internet.
As well as feeding the sick desires of predators, the trend also “poses a significant risk of children becoming victims of sextortion and cyber-bullying”.
“Argos covert operatives and victim identification analysts operate on a range of platforms across the internet, including infiltrating child sex offender forums, to keep pace with online threats and trends relating to child sexual abuse,” a Queensland Police spokesperson said.
Australian authorities continually collaborate with international counterparts, sharing intelligence and tools as the threat evolves.
“It is recognised that AI generated child exploitation material will pose challenges for law enforcement and risks hindering our attempts to identify and rescue children from abuse.
“However, technology does exist to assist law enforcement in identifying and differentiating AI generated material from real images.
“Any depiction of child exploitation material, whether real or generated by AI, is a criminal offence in Queensland and offenders will be prosecuted by the Queensland Police Service.”
The service urged parents and carers to speak with children about online safety and their digital footprint.
It’s important that “a child knows that nothing is so embarrassing or serious they cannot seek help from a trusted adult”.
“Members of the public are encouraged to report seriously harmful online abuse and illegal or restricted content to the eSafety Commissioner.”
Tips can also be provided to the Australian Centre to Counter Child Exploitation, which last year received more than 40,000 reports of online child exploitation.
Using AI for good
The chilling nefarious uses of AI have law enforcement agencies concerned, but the rapidly advancing technology can also add a new tool to their arsenal.
The Australian Federal Police is developing what it calls an “ethical AI” system to detect child sexual abuse material in videos or photos shared on the dark web or seized during criminal investigations.
“The AI tool will be a significant breakthrough for investigators who often have to manually look through tens of thousands of files to find evidence of suspected child abuse material,” an AFP spokesperson said.
“It means the AI tool will be able to quickly detect child abuse material on websites or offenders’ electronic devices and triage them for investigators.”
Australian authorities are using AI for good to fight against child abuse.
AFP Deputy Commissioner Lesa Gale said the system could be used in a number of ways, including trawling websites suspected of sharing abuse material.
A nationwide call for public assistance in developing the tool went out in September, asking Aussies to provide childhood photographs of themselves.
“By having access to ordinary, everyday photographs, the AI tool will be trained to look for what is different and identify unsafe situations, flagging potential child sexual abuse material,” Deputy Commissioner Gale said.
“But for it to work in the most effective way, we need about 100,000 pictures of Australians aged between zero and 17, and of all ethnicities.
“This initiative was officially launched in June 2022, but for this foundational phase to succeed, we need 10,000 pictures. Currently, we have fewer than 1000 images, but we hope to be able to use the AI tool within 12 months.”
The AFP and Monash University, which is assisting in the development of the technology, need adults to submit photos of themselves as kids – not of their own children, because consent is crucial.
“This enables development of technology that is both ethically accountable and transparent,” Deputy Commissioner Gale said.
“We also do not want to source images from the internet because children in those pictures have not consented for their photographs to be uploaded or used for research.”
Aussies are being urged to help fight against child abuse material.
Monash University will wholly own and manage the dataset and AFP use will be “subject to the same transparency and accountability measures that apply for all researchers”.
“People are also free to withdraw their childhood photos from the dataset if they change their mind,” she said.