
Why the Fortune 500 Keeps Getting Duped by Deepfake Job Applicants – And How to Stop It
The FBI is raising the alarm over an escalation by North Korean nation-state threat actors, who are leveraging generative AI, laptop farms, and deepfakes to infiltrate large companies through remote job applications, despite widely available solutions that can thwart such attacks.
We’ve discussed this threat before. But according to a recent Cyberscoop.com report, it’s getting worse. Thousands of North Korean operatives have secured jobs at more than a hundred Fortune 500 companies.
Using stolen or synthetic identity information and generative AI, these impostors produce polished LinkedIn profiles and identity credentials. Deepfake technology carries them through video calls; the fakes are sometimes unconvincing, but not often enough to give them away. Laptop farms in the US let the infiltrators spoof local IP addresses. And the impostors usually excel at the actual job, supported by an entire pod of 10 to 20 operatives performing the work behind the scenes for the fraudulent identity.
Speaking on a panel at the RSA Conference 2025, FBI Special Agent Elizabeth Pelker reported that when employers discover these impostors may be agents, they often hesitate to terminate them. “I think more often than not, I get the comment, ‘Oh, but Johnny is our best performer. Do we actually need to fire them?’” There are plenty of reasons the answer is an easy yes.
Fake Workers, Real Money
According to WIRED, each impostor and supporting pod can send as much as $3 million a year back to Pyongyang in wages alone. By Cyberscoop’s estimate, that could add up to $100 million funneled annually to the North Korean regime.
However, these operatives also use their corporate footholds to exfiltrate data and intellectual property, plant malware, and occasionally threaten executives with blackmail. They can also act on a larger scale, disrupting critical services or infrastructure. It’s no wonder the FBI is offering up to $5 million for information that leads to the disruption of these operations.
Generating the identities used in these scams is alarmingly easy. Once the purview of skilled operatives, identity fabrication is now within reach of non-experts: a steady flow of corporate data breaches, coupled with the rise of “crime-as-a-service” outfits, supplies realistic personas complete with fabricated LinkedIn profiles, convincing virtual backgrounds, and digitally manipulated identity documents. And at sites like personnotexist[.]org, even people with no image manipulation experience can easily generate deepfake personas for use in video.
For businesses increasingly reliant on remote teams, the sophistication of these outfits represents a significant threat. By some industry estimates, 40% of all cybercrime incidents in 2024 involved deepfake infiltration. At an average cost of $4.99 million per incident caused by malicious “insiders” like fraudulent remote workers, the losses add up quickly. For US-based companies, the average cost of a data breach tops $9 million. As it stands, 34% of companies in North America report suffering a data breach that cost them between $1 million and $20 million in the past three years, and as many as one in three breaches now involve insiders.
Once breached, organizations face the loss or theft of data, compromised networks, ransomware, and blackmail. That’s before lawsuits or regulatory fines, reputational damage, and the fallout from lost customer trust, shattered shareholder confidence, and derailed strategic initiatives.
‘How Ugly Is Kim Jong Un?’ Why Most Security Measures Fail
Identifying fake job candidates requires innovation and skepticism. Red flags include discrepancies between identity information and the candidates themselves. One example cited by Cyberscoop: a candidate with a complicated Polish name who, on the Zoom call, turns out to be a military-age Asian man who can’t pronounce it.
When deepfakes are in use, telltale signs include sudden jerkiness or audio that falls out of sync with the video. It’s also not uncommon for candidates to be fed answers to questions, creating latency as they read onscreen prompts. As a countermeasure, some hiring managers ask candidates to hold up identity credentials to match face with identity. But the same AI tools behind the deepfakes can, of course, also be used to create counterfeit credentials.
According to researchers, asking a candidate to wave a hand in front of their face can reveal deepfake inconsistencies. Another tactic: asking unexpected questions like, “How ugly is Kim Jong Un?” Perhaps unsurprisingly, that curveball is often enough to make operatives disconnect from the call on the spot. The technique works because North Korean agents are prohibited from saying anything negative about their leader; asked to criticize him, they typically end the interview rather than risk punishment for appearing disloyal. Of course, tricks like these won’t always work, and they’re never enough on their own.
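Why does the hand-wave work? Most real-time face-swap pipelines lose lock on a partially occluded face, producing visible glitches. As a toy illustration of automating that check (assuming OpenCV and its bundled Haar cascade; the threshold is a made-up placeholder, not a tuned detector), a screening tool could flag clips where face detection flickers rapidly during the wave:

```python
# Toy illustration: flag face-detection "flicker" during the hand-wave test.
# Real deepfake detection is far more sophisticated; this only shows the idea
# that face swaps often glitch under partial occlusion.
import cv2

cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml"
)

def detection_flicker_rate(video_path: str) -> float:
    """Fraction of frames where face detection toggles on/off vs. the previous
    frame. Genuine video with a brief hand-wave shows one short, smooth gap;
    a glitching face swap tends to flicker rapidly."""
    cap = cv2.VideoCapture(video_path)
    toggles, frames, prev_found = 0, 0, None
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
        found = len(cascade.detectMultiScale(gray, 1.1, 5)) > 0
        if prev_found is not None and found != prev_found:
            toggles += 1
        prev_found = found
        frames += 1
    cap.release()
    return toggles / max(frames, 1)

if __name__ == "__main__":
    # Illustrative threshold only -- tune on labeled interview footage.
    rate = detection_flicker_rate("interview_clip.mp4")
    print("suspicious" if rate > 0.05 else "no obvious flicker", rate)
```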
Today’s typical security protocols—basic ID verification, standard biometric checks, and interview procedures—are no match for increasingly sophisticated deepfake schemes. Attackers now mount presentation attacks using printed or emailed identity credentials, voice cloning, face swapping, and manipulated video captured from a legitimate user or candidate.
They can also mount injection attacks that manipulate the data stream between the camera or scanner and the authentication system. Fraudsters with access to an open device, for example, can inject a passing fingerprint or face ID into the authentication process, or feed in cloned voice samples via text-to-speech tools, bypassing security measures and gaining unauthorized access to corporate networks and services.
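A standard defense against injection attacks is capture attestation: the trusted camera path signs each sample so the authentication server can reject biometric data that arrived any other way. Below is a minimal sketch of the idea using a shared HMAC key; real deployments use hardware-backed asymmetric keys and standards such as FIDO attestation, and every name here is an assumption:

```python
# Minimal sketch of capture attestation to resist injection attacks.
# Assumes a device key provisioned at enrollment; production systems use
# hardware-backed asymmetric keys (e.g., FIDO attestation), not a shared HMAC.
import hmac, hashlib, os, time

DEVICE_KEY = os.urandom(32)  # provisioned secret (illustrative)

def sign_capture(frame_bytes: bytes, key: bytes) -> dict:
    """Runs on the trusted capture device: tag the raw frame with a timestamp
    and a MAC so the server can detect substituted or injected streams."""
    ts = str(int(time.time())).encode()
    mac = hmac.new(key, ts + frame_bytes, hashlib.sha256).hexdigest()
    return {"frame": frame_bytes, "ts": ts, "mac": mac}

def verify_capture(capture: dict, key: bytes, max_age_s: int = 5) -> bool:
    """Runs on the server: reject frames that are stale (replay) or whose MAC
    doesn't check out (injected between camera and authenticator)."""
    expected = hmac.new(key, capture["ts"] + capture["frame"],
                        hashlib.sha256).hexdigest()
    fresh = time.time() - int(capture["ts"]) <= max_age_s
    return fresh and hmac.compare_digest(expected, capture["mac"])

capture = sign_capture(b"jpeg-bytes...", DEVICE_KEY)
assert verify_capture(capture, DEVICE_KEY)        # genuine camera frame passes
capture["frame"] = b"attacker-injected-face"      # simulated injection
assert not verify_capture(capture, DEVICE_KEY)    # MAC mismatch -> rejected
```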
What It Really Takes to Defeat Deepfake Identities
As I mentioned, the strongest countermeasure against deepfakes and workforce identity fraud in all its forms is already widely available: advanced biometric authentication, securely linked to verified identities. 1Kosmos is a case in point. Our identity proofing solution leads potential employees through a mobile-first enrollment process in which they scan driver’s licenses, Social Security numbers, national identity documents, passports, or other government-issued credentials.
Automated workflows then verify the data, including any associated pictures and RFID-chip data, across more than 2,500 different identity documents from over 150 countries with over 99% accuracy. These verified individuals are linked to a facial biometric that is dynamically compared to the photo on valid credentials, and the bound identity is then encrypted with the user’s private key and stored in our private and permissioned blockchain.
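In outline, a proofing flow like the one described above chains document capture, document validation, and a face match before binding the verified identity to the user. Here is a hypothetical sketch of that shape; every helper is a placeholder stub standing in for a real document-verification or face-matching service, not the 1Kosmos SDK, and the threshold is illustrative:

```python
# Hypothetical outline of a document-based identity-proofing flow.
# All helper bodies are placeholder stubs -- NOT a real vendor API.
from dataclasses import dataclass

@dataclass
class Document:
    portrait: bytes   # photo extracted from the credential
    fields: dict      # name, DOB, document number, ...

def parse_document(doc_image: bytes) -> Document:
    return Document(portrait=b"...", fields={"name": "..."})  # stub: OCR + MRZ parse

def validate_document(doc: Document, rfid_chip: bytes | None) -> bool:
    return True   # stub: security features, issuer checks, chip signature

def face_match(portrait: bytes, selfie: bytes) -> float:
    return 0.995  # stub: biometric similarity score in [0, 1]

def bind_identity(doc: Document, selfie: bytes) -> None:
    pass          # stub: encrypt the bound identity and persist it

MATCH_THRESHOLD = 0.99  # illustrative; real systems tune per document type

def proof_identity(doc_image: bytes, rfid_chip: bytes | None, selfie: bytes) -> bool:
    doc = parse_document(doc_image)
    if not validate_document(doc, rfid_chip):
        return False                      # forged or tampered credential
    if face_match(doc.portrait, selfie) < MATCH_THRESHOLD:
        return False                      # live face doesn't match the document photo
    bind_identity(doc, selfie)            # bind verified identity to the biometric
    return True

print(proof_identity(b"license.jpg", None, b"selfie.jpg"))  # -> True with stubs
```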
Passive liveness detection spots subtle digital markers indicative of manipulated images, while active methods prompt users to perform specific, randomized actions—confirming genuine human presence at enrollment and, if desired, as a login step for video interviews or meetings. Fraudulent identities are flagged at the time of capture. Our solutions are also built on the only platform certified to NIST 800-63-3, UK DIATF, FIDO2, and iBeta ISO/IEC 30107 standards, with an SDK and standard APIs to avoid security exploits and prevent vendor lock-in.
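The randomized-action prompt is effectively a challenge-response protocol: because the action isn’t known until the moment of the check, a replayed or pre-rendered deepfake can’t contain the right response, and synthesizing it live adds telltale latency. A bare-bones sketch of the server-side flow, with the video-analysis step stubbed out as an assumption:

```python
# Bare-bones challenge-response liveness flow. The server picks a random
# action at request time, so a replayed or pre-rendered deepfake cannot
# contain the correct response. action_performed() is a stub standing in
# for a real pose/landmark analysis model.
import secrets

ACTIONS = ["turn head left", "blink twice", "raise right hand", "smile"]

def issue_challenge() -> str:
    # secrets, not random: the challenge must be unpredictable to the client.
    return secrets.choice(ACTIONS)

def action_performed(video: bytes, action: str) -> bool:
    return True  # stub: a real system runs video analysis here

def verify_liveness(video: bytes, challenge: str,
                    response_window_s: float, elapsed_s: float) -> bool:
    # Late responses are suspicious: operatives reading prompts, or a pipeline
    # synthesizing the requested action on the fly, add noticeable latency.
    if elapsed_s > response_window_s:
        return False
    return action_performed(video, challenge)

challenge = issue_challenge()
print(challenge,
      verify_liveness(b"clip", challenge, response_window_s=5.0, elapsed_s=2.3))
```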
Facing the Future: Stopping Deepfakes Before They Start
Unchecked, the epidemic of fraudulent job candidates threatens hiring integrity, cybersecurity infrastructure, and corporate reputations worldwide—and it will only get worse. According to Gartner, one in four job applicants will be fake by 2028. Add in the other “enhancements” made by otherwise legitimate job candidates, and the need for objective, bulletproof identity proofing grows even more critical.
Along with fostering a culture of cybersecurity vigilance throughout the organization (looking at you, HR), implementing widely available biometric technologies that adhere to exacting identity verification standards offers the best—well, only—surefire way to defeat ever-evolving threats from workforce fraud, North Korean and otherwise. Which means the next time you ask a job candidate, “How ugly is Kim Jong Un?” it’ll just be for the fun of it.
To learn more about how 1Kosmos can protect your organization against worker fraud with the only NIST, DIATF, FIDO2, and iBeta certified workforce verification and authentication solutions on the market, click here.