How AI Deepfakes Threaten Corporate Recruitment
The New Face of Recruitment Fraud
Artificial intelligence is rapidly changing the world, but not always for the better. A particularly troubling development is the rise of deepfake technology, which scammers are now using to impersonate qualified job candidates during video interviews. This isn't just cheating on a job application; it's a calculated strategy to land roles with the intent of infiltrating and compromising corporate systems from the inside.
A Problem of Alarming Scale
The threat is not just theoretical; it's growing at an alarming rate. Research firm Gartner has issued a startling prediction: by 2028, one in four job candidates worldwide will be entirely fake. Nor is this only a future problem. A recent study highlighted the immediate challenge, revealing that 17% of hiring managers have already encountered candidates using deepfake technology to alter their appearance or voice during video interviews.
From Stolen Identities to AI Personas
While using deception to get a job is not a new phenomenon, AI has made the process dangerously sophisticated and scalable. In the past, authorities have prosecuted individuals for helping scammers use stolen identities to secure high-paying jobs. For instance, the Justice Department charged a Tennessee man with aiding North Korean operatives in obtaining remote IT positions, with the salaries funneled back to support the nation's weapons program.
Today, AI makes it incredibly simple for bad actors to create entirely new digital personas. They can craft a version of themselves that looks different, sounds different, and appears to have the perfect qualifications, making detection more difficult than ever.
An HR Insider's Perspective on Deepfake Threats
So, what can the people on the front lines of hiring do about this evolving threat? We sought insight from Pam Genske, Vice President of Human Resources at Capitol Broadcasting Company, who confirmed that these concerns are becoming a frequent topic of discussion among industry professionals.
"They are coming up more and more frequently," Genske said. She pointed out a vulnerability in modern hiring practices, such as asynchronous interviews where candidates record video answers without a live interviewer. "Some companies are using an exclusive, they’ll send an email link to people and say, ‘Hey I want you to answer these questions on a video and nobody’s on the other side.’... I’m not a big fan of that because it doesn’t set up a good candidate experience, but it saves companies that are doing volume hiring a lot of time."
Asked whether she knew of companies that had accidentally hired a fraudulent candidate, Genske gave a direct "Yes," particularly among businesses with a fully remote workforce. "If you don’t have some checks and balances inside of things, those things can happen pretty easily. You’ve really got to have processes and procedures in place to keep those things from happening."
The Double-Edged Sword of AI in Hiring
Ironically, some of the tools that open the door to these scams are also powered by AI. Companies trying to manage high-volume hiring—where they need to screen and onboard many people quickly—often rely on AI-driven platforms like Paradox AI or HireVue to conduct initial video screenings. This efficiency can come at the cost of security, as it may reduce the human oversight needed to spot a sophisticated deepfake.
This highlights a central paradox: AI is being deployed as both a solution to streamline recruitment and the cause of a new, complex security problem.
How Can Businesses Protect Themselves?
Naturally, companies are exploring the use of AI to fight back against deepfake scammers. According to Genske, however, the defensive technology isn't there yet and is not reliable enough to be a silver bullet. For now, the best defense is a blend of technological awareness and robust, traditional human resources practices. Implementing multiple checks, verifying identities, and ensuring some form of direct, live interaction remains critical in an age where you can't always believe what you see.