Meta Grapples With Nudify Deepfake Ads On Its Platforms
Meta Under Fire: Deepfake Ads Plague Platforms
Meta has removed numerous advertisements promoting "nudify" apps after a CBS News investigation uncovered hundreds of these ads on its platforms. These AI tools create sexually explicit deepfakes from images of real people. In response to the findings, a Meta spokesperson stated, "We have strict rules against non-consensual intimate imagery; we removed these ads, deleted the Pages responsible for running them and permanently blocked the URLs associated with these apps."
The Disturbing Details: Nudify Ads on Instagram
CBS News found dozens of these ads within Instagram's "Stories" feature. The ads promoted AI tools that often advertised the capability to "upload a photo" and "see anyone naked." Other advertisements in Instagram Stories showcased tools for manipulating videos of real individuals. One particularly brazen ad featured text reading "how is this filter even allowed?" displayed beneath an example of a nude deepfake.
Some ads used highly sexualized, underwear-clad deepfake images of celebrities like Scarlett Johansson and Anne Hathaway to promote their AI products. The URLs in some ads led to websites claiming to animate real people's images to perform sex acts. Access to these "exclusive" and "advance" features often came with a price tag, with applications charging users between $20 and $80. In other instances, ad URLs redirected users to Apple's App Store, where "nudify" apps were available for download.
Meta platforms such as Instagram have marketed AI tools that let users create sexually explicit images of real people.
The Pervasive Problem: Hundreds of Ads Across Networks
An analysis of Meta's ad library revealed that, at a minimum, hundreds of these ads were present across the company's social media platforms. This includes Facebook, Instagram, Threads, the Facebook Messenger application, and Meta Audience Network—a platform enabling Meta advertisers to reach users on partner mobile apps and websites.
Targeting Tactics and Persistent Presence
According to Meta's own Ad Library data, many of these ads specifically targeted men aged 18 to 65 and were active in the United States, the European Union, and the United Kingdom. A Meta spokesperson acknowledged to CBS News that the spread of AI-generated content is an ongoing issue, stating, "The people behind these exploitative apps constantly evolve their tactics to evade detection, so we're continuously working to strengthen our enforcement." Despite Meta's efforts to remove flagged ads, CBS News found that ads for "nudify" deepfake tools remained available on Instagram even after the initial takedowns.
Understanding Deepfakes: The AI Behind the Deception
Deepfakes are images, audio recordings, or videos of real people that have been manipulated with artificial intelligence. These alterations misrepresent individuals as saying or doing something they did not actually say or do.
Legislative Efforts: The Take It Down Act
Last month, President Trump signed the bipartisan "Take It Down Act" into law. This act mandates that websites and social media companies remove deepfake content within 48 hours of receiving notice from a victim. While the law criminalizes the act of "knowingly publishing" or threatening to publish intimate images (including AI-created deepfakes) without consent, it does not specifically target the tools used to create such content.
Platform Policies Versus Reality: A Clear Disconnect
These AI tools violate platform safety and moderation rules established by both Apple and Meta. Meta's advertising standards clearly state, "ads must not contain adult nudity and sexual activity. This includes nudity, depictions of people in explicit or sexually suggestive positions, or activities that are sexually suggestive." Furthermore, Meta's "bullying and harassment" policy prohibits "derogatory sexualized photoshop or drawings" and aims to prevent users from sharing or threatening to share nonconsensual intimate imagery.
Apple's App Store guidelines also explicitly ban "content that is offensive, insensitive, upsetting, intended to disgust, in exceptionally poor taste, or just plain creepy."
Expert Weighs In: Insufficient Action from Tech Giants
Alexios Mantzarlis, director of the Security, Trust, and Safety Initiative at Cornell University's tech research center, has studied the surge in AI deepfake networks marketing on social platforms for over a year. He told CBS News that he had observed thousands more of these ads across Meta platforms, as well as on X and Telegram. Mantzarlis described X and Telegram as having a structural "lawlessness" conducive to such content. Meta, by contrast, does employ content moderators, but he believes the company's leadership lacks the will to address the issue effectively.
"I do think that trust and safety teams at these companies care. I don't think, frankly, that they care at the very top of the company in Meta's case," Mantzarlis said. "They're clearly under-resourcing the teams that have to fight this stuff, because as sophisticated as these [deepfake] networks are … they don't have Meta money to throw at it."
App Store Complicity: Apple and Google Under Scrutiny
Mantzarlis also noted that his research found "nudify" deepfake generators available on both Apple's App Store and Google's Play Store. He expressed frustration with the inability of these major platforms to enforce against such content. "The problem with apps is that they have this dual-use front where they present on the app store as a fun way to face swap, but then they are marketing on Meta as their primary purpose being nudification. So when these apps come up for review on the Apple or Google store, they don't necessarily have the wherewithal to ban them," he explained. Mantzarlis advocated for cross-industry cooperation, suggesting, "There needs to be cross-industry cooperation where if the app or the website markets itself as a tool for nudification on any place on the web, then everyone else can be like, 'All right, I don't care what you present yourself as on my platform, you're gone.'" CBS News reached out to Apple and Google for comment, but neither had responded by the time of writing.
Protecting Users: The Unseen Dangers of Deepfake Tools
The promotion of such apps by major tech companies raises serious concerns about user consent and online safety for minors. A CBS News analysis of one "nudify" website promoted on Instagram revealed no age verification prompt before allowing users to upload photos for deepfake generation. This issue is widespread. In December, CBS News' 60 Minutes reported a similar lack of age verification on a popular site generating fake nude photos using AI. Despite a warning that users must be 18 or older, 60 Minutes gained immediate access after clicking "accept," with no further verification.
Data indicates a high level of interaction with deepfake content among underage teenagers. A March 2025 study by the children's protection nonprofit Thorn found that 41% of teens had heard of "deepfake nudes," and 10% knew someone who had been victimized by such imagery.