AI Chatbots Found Generating Illegal Child Abuse Material
Fresh fears about the misuse of artificial intelligence have been ignited by the discovery of a chatbot site that offers explicit scenarios with preteen characters, disturbingly illustrated with illegal abuse images.
Disturbing Discovery by Child Safety Watchdog
A recent report from the Internet Watch Foundation (IWF), a child safety watchdog, has triggered urgent calls for the UK government to impose safety guidelines on AI companies. This comes amid a surge in child sexual abuse material (CSAM) created by this technology. The IWF was alerted to a chatbot site offering a variety of horrifying scenarios, including “child prostitute in a hotel,” “sex with your child while your wife is on holiday,” and “child and teacher alone after class.”
The IWF reported that in some instances, chatbot icons expanded into full-screen depictions of child sexual abuse imagery when clicked. These images then formed the background for subsequent chats between the bot and the user. The explicit chatbots, which role-played as children in scenarios like an eight-year-old girl trapped in a basement, were accessed through an ad on social media.
AI Generating Illegal and Photorealistic Abuse Imagery
The IWF found 17 images on the site that were AI-generated, photorealistic, and could be considered illegal child sexual abuse material under the Protection of Children Act. The site, which the IWF has not named to prevent further harm, also gives users the option to generate more images similar to the illegal content already displayed. When IWF analysts questioned one chatbot that displayed a CSAM image, it confirmed it was designed to mimic the behavior of a preteen.
Urgent Calls for AI Safety Regulation
The UK-based IWF, which operates globally to monitor such content, insists that any forthcoming AI regulation must require child-protection guidelines to be built directly into AI models from the very beginning. Kerry Smith, the IWF’s chief executive, stated, “The UK government is making welcome strides in tackling AI-generated child sexual abuse images... but more needs to be done, and faster.”
This call for action was echoed by the child protection charity NSPCC. Its chief executive, Chris Sherwood, said, “Tech companies must introduce robust measures to ensure children’s safety is not neglected and government must implement a statutory duty of care to children for AI developers.”
The UK's Legal and Governmental Response
A user-created chatbot like this falls within the scope of the UK’s Online Safety Act, which can penalize sites with multimillion-pound fines or even have them blocked. Ofcom, the UK regulator responsible for the act, affirmed that fighting child sexual exploitation is a top priority and that service providers failing to implement necessary protections will face enforcement action.
The government has also announced plans for an AI bill and, through the Crime and Policing Bill, is outlawing the possession and distribution of AI models designed to generate child sexual abuse material. A government spokesperson reinforced that UK law is "crystal clear" that creating, possessing, or distributing CSAM, including AI-generated versions, is illegal.
A Surge in AI-Generated Abuse Material
The IWF has reported a staggering 400% increase in reports of AI-generated abuse material in the first half of this year compared to the same period last year. The organization notes that video content in particular is surging due to rapid improvements in the underlying technology. The chatbot site in question had received tens of thousands of visits, including 60,000 in July alone.
The content is accessible in the UK but is hosted on US servers, and the site appears to be owned by a China-based company. The IWF has reported it to its US counterpart, the National Center for Missing and Exploited Children (NCMEC), for referral to law enforcement.