The Multimillion Dollar Industry of AI Image Abuse
The Rise of a Harmful and Lucrative Online Niche
For years, a dark corner of the internet has been expanding, populated by so-called "nudify" apps and websites. These services let users generate nonconsensual, abusive explicit images of women and girls and, in some cases, even AI-generated child sexual abuse material. Despite some efforts by lawmakers and tech companies to curb these harmful platforms, millions of people continue to access them every month. New research suggests that the creators of these sites are potentially earning millions of dollars annually.
Big Tech's Role in Fueling Abusive AI
A recent analysis of 85 such "undress" websites has uncovered a startling reality: the majority of these sites depend on technology and services from major US companies, including Google, Amazon, and Cloudflare. The investigation, published by Indicator, a publication focused on digital deception, found that these sites collectively received an average of 18.5 million visitors each month over the past six months and could be generating up to $36 million per year.
Alexios Mantzarlis, a cofounder of Indicator, describes the nudifier ecosystem as a "lucrative business" enabled by "Silicon Valley’s laissez-faire approach to generative AI." He argues that tech companies should have cut off services to these platforms as soon as it became clear their primary use was sexual harassment. Creating and sharing such explicit deepfakes is now illegal in a growing number of jurisdictions.
The research details that Amazon and Cloudflare provide hosting or content delivery network (CDN) services for 62 of the 85 websites analyzed. Meanwhile, Google's sign-on system was found on 54 of the sites, alongside other mainstream services for payments and operations.
In response, Amazon Web Services spokesperson Ryan Walsh stated that AWS has clear terms of service and acts quickly to review and disable prohibited content when reported. Similarly, Google spokesperson Karl Ryan noted, “Some of these sites violate our terms, and our teams are taking action to address these violations,” adding that the company is also working on longer-term solutions. Cloudflare did not respond to requests for comment.
The Devastating Human Impact
These nudify bots and websites have flourished since 2019, evolving from the same tools used to create the first deepfake pornography. As reporting from Bellingcat has shown, these services are often run by interconnected networks of companies profiting from the technology.
The damage caused is immense. Photos are stolen from social media to create abusive images. In schools, this has emerged as a new form of cyberbullying, with teenage boys creating deepfakes of their classmates. For victims, the experience is harrowing, and removing the images from the web can be an incredibly difficult battle.
Evolving Tactics and an Expanding Market
The business model is simple: sell subscriptions or credits to generate images. The researchers estimate that just 18 of the websites analyzed made between $2.6 million and $18.4 million over six months; annualized, the upper end of that range roughly matches the $36 million per year estimate cited above. This is likely a conservative figure, as it doesn't account for all transactions. Leaked data has suggested one prominent site has a multimillion-dollar budget, while another has claimed to have made millions.
The top five countries accessing these sites are the United States, India, Brazil, Mexico, and Germany. The sites are also becoming more sophisticated in their marketing, using paid affiliate and referral programs and even sponsoring videos with adult entertainers. They are adapting to avoid crackdowns, too, for instance, by using "intermediary sites" to disguise their identity when connecting to Google's sign-in system, a workaround adopted after earlier reporting prompted a crackdown on the practice.
A Slow but Emerging Response
While action has been slow, there are recent signs of a crackdown. San Francisco’s city attorney has sued 16 such services, Microsoft has identified developers evading its guardrails, and Meta has filed a lawsuit against a company that repeatedly advertised on its platforms. New legislation, like the Take It Down Act in the US, aims to force faster removal of nonconsensual imagery.
Experts believe these are steps in the right direction but that more comprehensive action is needed. Henry Ajder, an AI and deepfakes expert, emphasizes that progress will only be made when the businesses facilitating this "perverse customer journey" take targeted action.
Mantzarlis concludes that if major tech companies enforce their policies more strictly, the ability of these sites to thrive will diminish. “Yes, this stuff will migrate to less regulated corners of the internet—but let it,” he says. “If websites are harder to discover, access, and use, their audience and revenue will shrink. Unfortunately, this toxic gift of the generative AI era cannot be returned. But it can certainly be drastically reduced in scope.”