AI Deepfakes and Child Safety: The Legal Response
The Alarming Rise of AI-Generated Child Abuse Material
A recent case in Carbondale has brought a chilling reality to the forefront: a woman was charged with possessing child sexual abuse material found on her cellphone. While the three third-degree felony charges are severe, the images themselves were not of real children. Instead, they were products of artificial intelligence, meticulously crafted from existing images and detailed descriptions to create new, disturbingly realistic digital depictions.
This Lackawanna County case marks a first of its kind, made possible by legislation enacted just months prior. The incident underscores law enforcement's ongoing battle to keep pace with rapidly evolving technology and a problem of escalating proportions.
Statistics from the National Center for Missing & Exploited Children’s CyberTipline are stark: reports involving generative AI skyrocketed by an astounding 1,325%, jumping from 4,700 in 2023 to 67,000 in 2024. The use of AI to create illegal and harmful images is increasingly making headlines.
Pennsylvania Takes a Stand: New Legislation Against AI CSAM
In October, Governor Josh Shapiro signed a pivotal bill into law, Act 125 of 2024. The legislation, which passed unanimously in both the state House and Senate, significantly expands the definition of child sexual abuse material to explicitly include AI-generated images of children, whether or not the children depicted are identifiable.
Crucially, Act 125 also revises legal terminology, replacing "child pornography" with "child sexual abuse material." Lawmakers argued this change more accurately describes the chargeable offenses, moving away from language that might inadvertently downplay the harm or suggest the material is anything less than abusive.
The Human Cost: When AI Distorts Reality
A notable example of AI's misuse is the case of psychiatrist David Tatum. In May 2023, he was convicted in the Western District of North Carolina for possessing child sexual abuse material. Tatum had used AI to transform innocent childhood photographs, some taken decades earlier of his classmates, into pornographic images.
During the trial, some of these former classmates provided powerful victim impact statements. They described how treasured memories, captured in photos from first days of school or sporting events, were horrifically twisted into images that now evoke fear and distrust, forever tainting those once-innocent recollections. Tatum received a 40-year prison sentence for his actions.
Closing Legal Loopholes: Tackling Unidentifiable AI Images
Investigators in the Tatum case were able to build their case because the individuals in the doctored photos could be identified. They reached out to victims, heard their stories, and presented compelling evidence.
However, law enforcement faced a significant hurdle: what happens when the individuals in AI-generated images are entirely fictional or unidentifiable compilations? Previously, this often led to an impasse. Officers knew the possessor's intent was to have illegal imagery, but without a traceable, real victim, prosecution was challenging, and the material itself sometimes fell into a legal gray area.
Over the past several months, both state and federal governments have addressed this critical concern. New laws now explicitly make it illegal to possess computer-generated child sexual abuse images, even if the depicted individuals cannot be traced back to a specific person.
Lackawanna County District Attorney Brian Gallagher anticipates that the Carbondale case is just the beginning. “This is the first AI-generated child pornography case that we’ve prosecuted in Lackawanna County, but unfortunately, it will not be the last,” he stated. “Lackawanna County law enforcement will continue to be proactive and aggressively investigate, arrest and prosecute child predators and those who manufacture child sexual abuse material.”
The defendant in the Carbondale case, Sandra Rogers, 35, was charged on May 12 and waived her preliminary hearing on May 27, moving the case to county court. Each third-degree felony charge carries a potential maximum sentence of seven years in prison and a $15,000 fine if she is convicted.
Law Enforcement Gains New Tools to Combat AI Predators
Wyoming County District Attorney Joe Peters emphasized that the new legislation provides a vital additional tool for law enforcement to identify and prosecute predators. “You can now bring charges, even when there was never a real person involved,” he explained. “Before the changes in the law there would have been challenges to that sort of child pornography. In fact that term has been redone to more accurately reflect what’s going on. Because, in some cases there is no child and the pornography didn’t actually happen — it’s AI generated, we can still charge it.”
Peters further noted that the law broadens law enforcement's reach against technology weaponized for extortion, humiliation, profit, or sexual gratification. This includes instances where criminals superimpose an innocent person’s face onto another's body in explicit contexts. Human traffickers frequently employ such tactics, grooming young women with flirtatious conversation before requesting a photo. “Once you send that photo, they own you,” Peters warned.
State Sen. Tracy Pennycuick, a Republican representing Berks and Montgomery counties and a sponsor of the state legislation, described the use of AI to produce child sexual abuse images as something that “shocks the conscience.” As chair of the Senate Communications and Technology Committee, she asserted, “We need to do everything we can to prevent individuals from using AI for these insidious purposes. By working in a bipartisan manner, the General Assembly is making it clear that Pennsylvania is not the place for this depraved activity.”
Federal Action: The Take It Down Act Targets AI Exploitation
Ismail Onat, an associate professor of criminal justice, cybersecurity, and sociology at the University of Scranton, highlighted that federal law is also adapting to these emerging technological threats. The Take It Down Act, signed into law by President Donald Trump on May 19, makes it a federal crime to post sexually explicit images online without consent, including images generated by artificial intelligence.
First lady Melania Trump, instrumental in guiding the legislation through Congress, hailed it as a “national victory” that helps protect children from online exploitation and deters the creation of fake images using AI. “AI and social media are the digital candy for the next generation, sweet, addictive and engineered to have an impact on the cognitive development of our children,” she commented. “But unlike sugar, these new technologies can be weaponized, shape beliefs and, sadly, affect emotions and even be deadly.”
Onat concurs that in a world where AI can generate highly realistic images of nonexistent children in sexual scenarios, often termed “deepfakes,” this legislation is a significant step in the right direction.
Expert Perspectives Adapting to the Evolving AI Threat
Professor Onat, who also directs the university’s Center for the Prevention & Analysis of Crime, observed that law enforcement has always been tasked with adapting to new inventions and technologies, a challenge that intensifies as the pace of innovation accelerates. He drew a parallel to the invention of the automobile: while offering unprecedented mobility, it also created opportunities for numerous infractions, from speeding to unsafe vehicle operation. Law enforcement first had to understand the nature of this new invention before implementing governing laws.
Similarly, as artificial intelligence evolves and becomes more sophisticated, government and law enforcement agencies must react swiftly. Because these agencies did not create the technology, Onat explained, it takes time for them to fully understand its implications and potential for misuse. He credited state and federal law enforcement agencies for their consistent efforts in identifying and addressing new technologies as they emerge. The FBI, for instance, actively investigates and prosecutes cases related to artificial intelligence, particularly when it involves child sexual abuse material.
Educating the Next Generation of Cybercrime Fighters
Higher education is also evolving to meet the demand for oversight of new technologies. The University of Scranton now offers a cybercrime and homeland security major, specifically designed to address the need to investigate and protect information in cyberspace. The university's website describes the program's goal: “to form the cybercrime investigators, digital forensic examiners, information security analysts, and national security analysts of tomorrow.”
This program includes an internship allowing students to apply their cybersecurity skills at the Lackawanna County district attorney’s office. Students gain hands-on experience extracting data from cellphones, analyzing it, evaluating findings with detectives, and presenting information to the prosecution team before trial. This initiative aims to equip students to assist law enforcement in keeping pace with ever-evolving technology—a force that, while beneficial to many, is also frequently misused by criminals.