
Clearview AI Develops Tool to Counter Deepfake Images

2025-09-23 · Rebecca Heilweil · 3 minute read
Tags: Artificial Intelligence, Cybersecurity, Deepfake

Clearview AI, the facial recognition company known for scraping the internet for images to build its vast database, is now tackling a new challenge: AI-generated faces.

In a recent statement to FedScoop, co-CEO Hal Lambert confirmed that Clearview AI is building a new tool designed to detect manipulated images for its customers. This client base notably includes several federal law enforcement agencies. Lambert was appointed co-CEO earlier this year following a board decision to replace the company's original top executive.

The company has amassed billions of images from public sources online, including social media, to build a powerful facial recognition database. That database has been used by a diverse range of clients, including Immigration and Customs Enforcement, the government of Ukraine, and police departments seeking to identify victims of child pornography. Clearview AI often points to its high accuracy scores in facial recognition testing by the National Institute of Standards and Technology to validate its technology.

The Rise of Deepfakes: A New Challenge

The proliferation of deepfakes, images created or manipulated with artificial intelligence, presents a significant complication for technologies like Clearview AI's. While Lambert told FedScoop that deepfakes haven't been a major issue for the company yet, Clearview is proactively developing a tool to tag potentially AI-generated images. The goal is to have it ready for customers by the end of the year, though further details were not provided.
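Clearview has not published how the detector will work, so the sketch below is purely illustrative: a hypothetical scoring function (stubbed here with canned values) estimates how likely each image is to be AI-generated, and anything above an assumed 0.8 threshold is tagged rather than trusted as real.

```python
from pathlib import Path
from typing import Callable, Dict, Iterable

def tag_images(
    paths: Iterable[Path],
    score: Callable[[Path], float],  # stand-in for a real deepfake detector
    threshold: float = 0.8,          # assumed cutoff, not Clearview's
) -> Dict[str, bool]:
    """Flag each image whose AI-generated score meets the threshold."""
    return {p.name: score(p) >= threshold for p in paths}

# Demo with canned scores standing in for a real detector model.
canned = {Path("crowd.jpg"): 0.12, Path("portrait.jpg"): 0.91}
print(tag_images(canned, canned.__getitem__))
# -> {'crowd.jpg': False, 'portrait.jpg': True}
```

Keeping the detector behind a plain callable means any scoring model can be swapped in later, which matters given how quickly image generators change.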

Since generative AI tools from companies like OpenAI and Google became widely available, deepfakes have spread rapidly. This poses a threat to any company trying to train accurate facial recognition models or build a reliable database of identities from online images.

Expert Insights on Synthetic Media Risks

One major hurdle is that AI-generated faces can become mixed in with real ones, causing a system to learn incorrect statistical patterns and reducing its accuracy on real people. Siwei Lyu, a computer science professor at the University at Buffalo, notes, “AI-generated faces often do not correspond to real humans, but facial recognition systems may treat them as unique identities. This leads to ‘ghost identities’ in the database.”
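To make Lyu's point concrete, here is a toy illustration (not Clearview's system) that models faces as embedding vectors: a synthetic face enrolled in the gallery becomes an entry that corresponds to no human being, yet still wins similarity searches. Every name, dimension, and threshold below is invented for the example.

```python
import numpy as np

rng = np.random.default_rng(0)

def embed(seed: int) -> np.ndarray:
    """Stand-in for a face-embedding model: one unit vector per identity."""
    v = np.random.default_rng(seed).normal(size=128)
    return v / np.linalg.norm(v)

# Gallery of three real people, plus one AI-generated face that slipped in.
gallery = {
    "alice": embed(1),
    "bob": embed(2),
    "carol": embed(3),
    "ghost_0417": embed(99),  # synthetic face: matches no real person
}

def identify(probe: np.ndarray, threshold: float = 0.5):
    """Return the best-matching gallery entry if similarity clears the bar."""
    name, score = max(
        ((n, float(probe @ v)) for n, v in gallery.items()),
        key=lambda kv: kv[1],
    )
    return (name, round(score, 2)) if score >= threshold else (None, round(score, 2))

# A fresh scrape of the same synthetic face "identifies" as the ghost entry,
# so the database now holds an identity that belongs to nobody.
probe = gallery["ghost_0417"] + rng.normal(scale=0.02, size=128)
probe /= np.linalg.norm(probe)
print(identify(probe))  # -> ('ghost_0417', 0.97) or similar
```

Screening images with a deepfake detector before enrollment, which is roughly what Clearview's planned tagging tool would enable, is one way to keep such entries out of a gallery in the first place.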

Lyu added that AI models can also produce faces with inherent biases, over- or under-representing certain ethnicities or features, which a facial recognition system can then adopt. He also cautioned that the performance of current deepfake detection technologies can be inconsistent.

Emmanuelle Saliba, chief investigative officer at GetReal Security, highlighted the growing sophistication of these fakes. “While some tools still generate images with some visual inconsistencies, others are closer to hyperrealism and it is nearly impossible to tell apart from an image of a real human being,” she said. “We are seeing a wave of convincing fabricated images hit our feeds in almost every breaking news event.”

Clearview's Response and Ongoing Controversies

Despite the new tool's development, Clearview AI continues to face criticism from privacy and civil liberties advocates, including the American Civil Liberties Union and the Electronic Privacy Information Center. Lawmakers have also argued that the company's methods could endanger public privacy and have called on federal agencies to cease using the technology.

In response to these concerns, Lambert asserted that the tool cannot be connected to live surveillance feeds. “People are always worried that this is going to turn into some sort of surveillance state, and it’s just not that,” he stated. “None of this is live. There’s no live feeds. This is all simply data that was out there, available, and can be used as public data.”
