Meta's Role in the Rise of AI Financial Scams
Deepfakes, which are hyper-realistic fake digital media created with Artificial Intelligence (AI), are no longer science fiction. These AI-generated images, videos, and audio clips are being used for malicious purposes, including non-consensual pornography, disinformation campaigns, and sophisticated financial fraud.
One striking example involved a French woman who was conned out of $800,000 (€685,000) by a scammer impersonating actor Brad Pitt. The fraudster used deepfaked photos of the actor in a hospital bed and convincingly claimed he needed money for medical treatment because his own funds were frozen amid his divorce. This case highlights how criminals exploit trust, emotion, and celebrity fascination using advanced AI tools.
How Scammers Create Malicious Deepfakes
Creating deepfakes, which can range from entirely synthetic videos to cloned voices layered over real footage, has become alarmingly easy. Malicious content isn't just made with hidden dark web tools; it's often created with technology from legitimate providers.
For instance, Stability AI’s popular image generator, Stable Diffusion, has been misused to create child sexual abuse material, resulting in criminal charges. A 2024 UN report also found that financial fraudsters combined open-source tools like Google Face Mesh with other software to generate convincing deepfake scam videos. Open-source software is particularly appealing to criminals because it is free, accessible, and can produce deepfakes that are difficult to trace.
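To illustrate how low the technical barrier is, consider the landmark-extraction step that face-swap pipelines are typically built on. The sketch below is a minimal illustration using Google's freely available MediaPipe Face Mesh library (presumably the tool the UN report calls "Google Face Mesh"); it only detects facial geometry, not a deepfake pipeline, and "face.jpg" is a placeholder path.

```python
# Minimal sketch: extracting facial landmarks with Google's open-source
# MediaPipe Face Mesh library, the kind of freely available building block
# that face-swap pipelines are assembled from. "face.jpg" is a placeholder.
import cv2
import mediapipe as mp

image = cv2.imread("face.jpg")  # load a single frame (OpenCV reads as BGR)
if image is None:
    raise FileNotFoundError("face.jpg not found")

# static_image_mode=True treats the input as an independent photo rather
# than a video stream; refine_landmarks adds extra points around eyes/lips.
with mp.solutions.face_mesh.FaceMesh(
    static_image_mode=True,
    max_num_faces=1,
    refine_landmarks=True,
) as face_mesh:
    # MediaPipe expects RGB input, so convert from OpenCV's BGR ordering.
    results = face_mesh.process(cv2.cvtColor(image, cv2.COLOR_BGR2RGB))

if results.multi_face_landmarks:
    landmarks = results.multi_face_landmarks[0].landmark
    # Each landmark is a normalized (x, y, z) coordinate; the ~470 points
    # describe the face geometry that swapping/reenactment tools map onto.
    print(f"Detected {len(landmarks)} facial landmarks")
```

A few lines of Python and a consumer laptop are enough to run this; the sophistication lies in the freely downloadable models, not in the criminal's own skills.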
Meta's Central Role in Spreading Fraud
Digital platforms like those owned by Meta act as the primary bridge between scammers and their victims. Instagram, Facebook, and WhatsApp are frequently the platforms of choice for conducting these deepfake scams.
Scams reported on one Meta platform often simply reappear on another, and fraudsters frequently move victims over to WhatsApp to continue the conversation and close the fraudulent deal. Data from UK bank TSB for 2021–2022 showed that a staggering 80 percent of fraud cases originated on Meta-owned platforms.
Recent high-profile examples are numerous. A deepfake video of Dutch prime minister Dick Schoof was used in a sponsored Facebook ad for a fake investment scheme, reaching 250,000 views. Despite reports to Meta, the ad remained active. Similarly, a deepfake of former fund manager Anthony Bolton was used to lure investors into a WhatsApp group, with one Facebook post of the video garnering over 500,000 views in June 2025. The problem became so severe that Danish TV presenters, whose likenesses were stolen for thousands of fake ads, reported Meta to the police in 2024.
Profit Over People: The Core of the Problem
At the heart of the issue lies Meta's dysfunctional moderation system, which appears to prioritize ad revenue over public safety. A 2025 Wall Street Journal investigation revealed that Facebook and Instagram staff were instructed to allow an advertiser up to 32 fraud “strikes” before taking action, a policy that deliberately deprioritizes enforcement against scams to avoid losing ad revenue. Considering that advertising generated $160.6 billion of Meta's $164.5 billion revenue in 2024, nearly 98 percent of the total, the financial incentive is clear.
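The policy the WSJ describes amounts to a simple threshold rule. The sketch below is a hypothetical reconstruction, offered only to show why such a rule lets scams keep running; everything except the reported 32-strike ceiling is an illustrative assumption, not Meta's actual implementation.

```python
# Hypothetical reconstruction of a strikes-based enforcement rule like the
# one the WSJ describes. Everything except the reported 32-strike ceiling
# is an illustrative assumption, not Meta's actual code or policy.
FRAUD_STRIKE_CEILING = 32  # confirmed fraud reports tolerated per advertiser

def enforcement_action(confirmed_fraud_strikes: int) -> str:
    """Decide what happens to an advertiser after a confirmed fraud report."""
    if confirmed_fraud_strikes >= FRAUD_STRIKE_CEILING:
        return "ban advertiser"
    # Below the ceiling the account keeps buying ads: a scammer can run
    # dozens of confirmed fraudulent campaigns before losing access.
    return "no action"

# An ad account with 31 confirmed fraud strikes is still a paying customer.
print(enforcement_action(31))  # -> "no action"
print(enforcement_action(32))  # -> "ban advertiser"
```

Under such a rule, every strike short of the ceiling is, in effect, revenue the platform chooses to keep.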
Why Current EU Regulations Are Falling Short
Existing rules in the European Union are not making a significant difference. The EU AI Act mandates that AI-generated content must be clearly labeled, with steep fines for non-compliance in countries like Spain. However, fraudsters simply use non-compliant tools to bypass these rules, making platform-level enforcement the only viable solution.
The EU’s Digital Services Act (DSA) requires large platforms like Facebook to have systems for reporting illegal content and to maintain a public ad repository. Yet, Meta has been criticized for poor DSA compliance, including inadequate moderation staffing and a convoluted reporting process.
While some countries like France have criminalized sharing non-consensual deepfakes, this alone doesn't prevent the harm, as scammers are rarely caught.
In the UK, a recent parliamentary report on its Online Safety Act reached a similar conclusion: platforms are either unable or unwilling to stop harmful content like fraudulent ads.
A Call for Stronger Accountability
Mounting evidence suggests that Meta is not just failing to stop fraud but is actively monetizing it by knowingly allowing scam ads on its platforms. This has led to calls to hold the company criminally liable for aiding and abetting organized crime.
The most effective way to force platforms like Meta to act responsibly may be to criminalize and prosecute the intentional hosting of illegal content, such as these deepfake financial scams.