AI-generated child sex abuse images targeted with new laws


Sima Kotecha

Senior UK Correspondent

Four new laws will tackle the threat of child sexual abuse images generated by artificial intelligence (AI), the government has announced.

The Home Office says that, to better protect children, the UK will be the first country in the world to make it illegal to possess, create or distribute AI tools designed to create child sexual abuse material (CSAM), with a punishment of up to five years in prison.

Possessing AI paedophile manuals will also be made illegal, with offenders facing up to three years in prison. These manuals teach people how to use AI to sexually abuse young people.

"We know that sick predators' activities online often lead to them carrying out the most horrific abuse in person," said Home Secretary Yvette Cooper.

"This government will not hesitate to act to ensure the safety of children online by ensuring our laws keep pace with the latest threats."

The other laws include making it an offence to run websites where paedophiles can share child sexual abuse content or provide advice on how to groom children. That would be punishable by up to 10 years in prison.

And the Border Force will be given powers to instruct individuals they suspect of posing a sexual risk to children to unlock their digital devices for inspection when they attempt to enter the UK, as CSAM is often filmed abroad. Depending on the severity of the images, this will be punishable by up to three years in prison.

Artificially generated CSAM involves images that are either partly or completely computer generated. Software can "nudify" real images and replace the face of one child with another, creating a realistic image.

In some cases, the real-life voices of children are also used, meaning innocent survivors of abuse are being re-victimised.

Fake images are also being used to blackmail children and force victims into further abuse.

The National Crime Agency (NCA) said it makes around 800 arrests each month relating to online threats to children. It said 840,000 adults nationwide pose a threat to children - both online and offline - equivalent to 1.6% of the adult population.

Cooper said: "These four new laws are bold measures designed to keep our children safe online as technologies evolve.

"It is vital that we tackle child sexual abuse online as well as offline so we can better protect the public," she added.

Some experts, however, believe the government could have gone further.

Prof Clare McGlynn, an expert in the legal regulation of pornography, sexual violence and online abuse, said the changes were "welcome" but that there were "significant gaps".

The government should ban "nudify" apps and tackle the "normalisation of sexual activity with young-looking girls on the mainstream porn sites", she said, describing these videos as "simulated child sexual abuse videos".

These videos "involve adult actors but they look very young and are shown in children's bedrooms, with toys, pigtails, braces and other markers of childhood," she said. "This material can be found with the most obvious search terms and legitimises and normalises child sexual abuse. Unlike in many other countries, this material remains lawful in the UK."

The Internet Watch Foundation (IWF) warns that more AI-generated child sexual abuse images are being produced and are becoming more prevalent on the open web.

The charity's latest data shows reports of AI-generated CSAM have risen 380%, with 245 confirmed reports in 2024 compared with 51 in 2023. Each report can contain thousands of images.

In research last year, it found that 3,512 AI-generated child sexual abuse and exploitation images were discovered on a single dark web site over a one-month period. Compared with a month in the previous year, the number of images in the most severe category (Category A) had risen by 10%.

Experts say AI CSAM can often look incredibly realistic, making it difficult to tell the real from the fake.

The interim chief executive of the IWF, Derek Ray-Hill, said: "The availability of this AI content further fuels sexual violence against children.

"It emboldens and encourages abusers, and it makes real children less safe. There is certainly more to be done to prevent AI technology from being exploited, but we welcome [the] announcement, and believe these measures are a vital starting point."

Lynn Perry, chief executive of children's charity Barnardo's, welcomed government action to tackle AI-produced CSAM "which normalises the abuse of children, putting more of them at risk, both on and offline".

"It is vital that legislation keeps up with technological advances to prevent these horrific crimes," she added.

"Tech companies must make sure their platforms are safe for children. They need to take action to introduce stronger safeguards, and Ofcom must ensure that the Online Safety Act is implemented effectively and robustly."

The new measures announced will be introduced as part of the Crime and Policing Bill when it comes to parliament in the next few weeks.
