New film exposes AI’s role in online child sexual exploitation and calls for urgent global action


Posted January 16, 2025 by Jeuken

Embargoed for use until 00:01 CET Friday 17 January 2025

WeProtect Global Alliance today (17 January 2025) unveiled a compelling short film, Protect Us, exposing the harrowing ways generative AI applications and chatbots are being weaponised to exploit children online. Premiering today at the DLD Conference in Munich, the film calls on global leaders to take decisive action to safeguard children and young people.

 

“Generative AI has revolutionised the creation of hyper-realistic text, images, audio, and video, transforming the digital landscape but also introducing unimaginable dangers for children,” said Baroness Joanna Shields OBE, founder of WeProtect Global Alliance and executive producer of the film. “Predators are harnessing these technologies to create fake yet convincing sexually explicit images and child sexual abuse material (CSAM), manipulate children through grooming tactics, and infiltrate digital spaces with devastating consequences. This film lays bare the scale of the crisis and underscores the need for unified action.”

 

The urgency cannot be overstated. It is estimated that over 300 million children worldwide were victims of online sexual exploitation and abuse in the past year alone. But numbers, no matter how staggering, cannot convey the human cost of this crisis. Behind each statistic is a moment frozen in time—a moment of fear, shame, and helplessness that will stay with that child forever.

 

Adding to the complexity of the threat is the alarming rise in peer-on-peer harm and the proliferation of self-generated sexual images among young people. A recent report by Thorn, a WeProtect Global Alliance member, reveals that 1 in 10 children are aware of peers using generative AI to create non-consensual intimate images of others, highlighting the growing prevalence of these behaviours and their devastating impact on victims. The sheer volume of harmful content being generated is overwhelming reporting systems and hampering law enforcement investigations.

 

“The trauma of these crimes is not only deeply personal but also indelible. Images and interactions are shared and algorithmically amplified across the digital landscape, perpetuating harm long after the initial violation,” Shields said.

 

Protect Us is not just a film—it is a plea to confront this moral emergency and act decisively to prevent further harm to children. Premiering at a pivotal moment in the fight against online child sexual exploitation, the film emphasises the critical need for governments, technology companies, and society to unite in addressing this growing crisis.

 

“Children are not miniature adults. They lack the capacity to discern real from fake or safe from harmful online. We cannot expect them to navigate these dangers alone. Instead, we must build online spaces that are age-appropriate and inherently protect children and teens until they are old enough to make informed decisions.”

 

The film can be viewed at https://youtu.be/OuH-D-au1Ho

 

ADDITIONAL BACKGROUND

 

Key trends

 

·       Explosion of child sexual abuse material (CSAM): WeProtect Global Alliance's recent Global Threat Assessment report revealed a rise in the use of generative AI to create child sexual abuse material since early 2023.

 

Additionally, recent reports from the Stanford Internet Observatory provide concerning statistics about AI-generated CSAM and its impact on reporting systems:

o    Proliferation of AI-Generated CSAM: A study in late 2023 revealed that some popular AI text-to-image generators were trained on datasets containing known CSAM images. This inclusion has enabled the production of photorealistic AI-generated explicit images.

o    Challenges for detection systems: AI-generated CSAM does not match traditional hash databases like PhotoDNA, which were designed for identifying previously known CSAM. This makes detection and reporting significantly more difficult.

o    Impact on reporting systems: Platforms like the National Center for Missing and Exploited Children’s (NCMEC) CyberTipline are overwhelmed by increasing reports, many of which now involve novel AI-generated content. Law enforcement struggles with prioritisation, slowing response times and limiting resources for victim identification.

 

In 2023, the National Center for Missing and Exploited Children (NCMEC) received over 31 million reports of suspected CSAM—a staggering number that continues to climb as AI tools make it easier to create and share such material. A report by the Internet Watch Foundation (IWF) found over 20,000 AI-generated child abuse images on a dark web forum within a single month.

 

These statistics highlight the urgent need for updated detection technologies, legislative frameworks, and cross-sector collaboration to mitigate the growing risks posed by AI-enabled exploitation. 

 

·       Law enforcement overwhelmed: INTERPOL has reported a 30% increase in case backlogs, as investigators grapple with distinguishing between real victims and AI-generated material. This delay costs precious time and resources in rescuing children from harm.

 

·       Sophisticated grooming tactics: AI is enabling predators to create fake identities and deploy chatbots that mimic children’s behaviour with chilling accuracy. These tools accelerate the grooming process, making it faster and harder to detect.

 

·       Accessibility of dangerous tools: Generative AI platforms capable of producing synthetic CSAM are widely available for little to no cost, putting advanced criminal tools in the hands of anyone with an internet connection.

 

About WeProtect Global Alliance

 

WeProtect Global Alliance is a global movement bringing together more than 320 governments, private sector companies and civil society organisations working together to transform the global response to child sexual exploitation and abuse online. 

 

The Alliance is the largest and most diverse global movement dedicated to ending child sexual exploitation and abuse online. It supports and generates political commitment and practical approaches to make the digital world safe and positive for children, preventing sexual abuse and long-term harm.

 

Read more about the Alliance and our work at www.weprotect.org

 

Baroness Joanna Shields OBE is a technology industry veteran and the founder of the WeProtect Global Alliance. She served as the UK's first Minister for Internet Safety and Security, as UK Ambassador for Digital Industries, and as Digital Economy Adviser to the Prime Minister. A life peer in the House of Lords, she champions policies that protect children online and advance ethical AI development. With over three decades of leadership experience in the tech industry, Shields has built companies, products, and platforms that have transformed markets and reshaped industries. She held senior executive roles at Facebook, Google, AOL, and RealNetworks. As a CEO, she led Bebo and Veon to successful acquisitions and transformed BenevolentAI into a leader in AI-driven drug discovery. Today, as the founder of Precognition, she advises governments and companies on creating AI-driven solutions that prioritise safety, human agency and empowerment. Shields serves on the World Economic Forum's Global Future Council on AI and the Transatlantic Commission on Election Integrity and was twice elected Co-Chair of the Global Partnership on Artificial Intelligence.

 

-- END ---
Contact Email [email protected]
Issued By WeProtect Global Alliance
Country United Kingdom
Categories News , Parenting , Technology
Tags artificial intelligence , ai , generative ai , child abuse , child exploitation , social media
Last Updated January 16, 2025