Could you Spot a Digital Twin at Work? Get Ready for Hyper-Personalized Attacks
The world is worried about deepfakes. Research conducted in the U.S. and Australia finds that nearly three-quarters of respondents feel negatively about them, associating the AI-generated phenomenon with fraud and misinformation. But in the workplace, we’re more likely to let our guard down. That’s bad news for businesses as the prospect of LLM-trained malicious digital twins draws nearer. The potential for phishing, business email compromise (BEC), digital extortion and sensitive data theft on a whole new scale is just around the corner. People, processes and new technology will be key to mitigating the new threat of malicious digital twins.
A New Era
Surveys show that two-thirds of respondents are worried about falling victim to a deepfake-related scam. But when employees pick up the phone at work, log on to a video conferencing call or read an email, most aren’t primed to expect spoofing, phishing, or other fraudulent campaigns.
Nefarious actors have an opportunity to use leaked, breached or scraped personally identifiable information (PII) belonging to a victim to train an LLM to mimic the knowledge and personality of an employee. That model can then be combined with deepfake video and audio to create a malicious but highly convincing digital twin.
There are several scenarios in which digital twins could thrive. In 2022, the FBI warned of virtual meeting rooms being hijacked by deepfake personas mimicking the CEO to carry out BEC attacks. An LLM-powered deepfake trained to talk and act like that CEO would be far more effective at tricking attendees into making a large wire transfer. The same scam could be pulled off by impersonating an employee at a key supplier who wants to be paid.
Can You Trust Your Co-Workers?
In another scenario, threat actors may use digital twins to impersonate would-be employees to get a job and insider access. The FBI has already issued an advisory warning of fake North Korean IT workers who managed to bypass vetting processes to gain employment at U.S. firms. With this kind of privileged access, a threat actor could pose a major threat to sensitive business intellectual property and highly regulated customer data. They may seek to extort the victim company, or merely stay hidden, harvesting information while earning paychecks, as several recently indicted actors did.
Digital twins would elevate the threat yet further, making it easier for threat actors to get hired at a U.S. firm. And as with all of these threats, attacks could be hyper-personalized yet scaled up with minimal effort thanks to cloud-powered AI.
Beyond the Enterprise
Similar tactics, techniques and procedures could be used by threat actors to bypass Know Your Customer (KYC) checks and access customers' bank accounts, leveraging a combination of unintentionally exposed biometrics, leaked and breached PII and deepfake technology. One report claims deepfakes were used in an attempt to defeat KYC checks once every five minutes last year.
A digital twin of a C-suite executive could also be spun up to run large-scale scams on social media platforms, trashing the company brand and damaging customer loyalty. We’ve seen plenty of these campaigns over the years, but the addition of realistic-looking and talking personas could make these scams dangerously believable.
Build It or Jailbreak It
How likely is this all to come to pass? These scenarios provide a glimpse into what’s possible with malicious digital twins. Over the past few months, we’ve seen an astonishing rate of development in the deepfake space. Advanced capabilities are increasingly commonplace on the cybercrime underground, lowering the barrier to entry for a new swathe of financially motivated threat actors.
On the LLM side, to date, we haven't seen many criminal models, presumably because training takes significant time and resources. But jailbreak-as-a-service offerings are widespread, allowing threat actors to circumvent the safety guardrails of legitimate chatbots to achieve their ends. It's also a concern that many existing LLMs and other components of the supply chain are riddled with vulnerabilities and often misconfigured to be publicly accessible with no authentication required.
At the same time, the pool of potential victims keeps growing. Individuals who post photos and videos online may want to limit their public profiles, lest they become a future target for a malicious twinning attempt.
Maximum Awareness, Zero-Trust
So how do corporate security teams find a way through all of this? It comes back to people, processes and technology.
Take people first. Many people surveyed say they can identify deepfakes: 57% for images, 48% for videos and 45% for audio. But as the technology gets better, this will get harder. Therefore, user awareness programs must be updated to teach employees not only how to better spot deepfakes but also how new social engineering attacks might work. We'll need to encourage the same kind of caution when interacting with 'colleagues' on phone and video calls as when receiving emails.
Next, consider the process. New rules may need to be devised to ensure large wire transfers don’t get signed off unless personally double-checked with the original requester. The same goes for password reset requests and other security-sensitive activities. A balance must be struck between security and productivity, in line with corporate risk appetite. Hiring processes will require more authentication and verification checks, which should be an easier sell to the business given that decisions are less time-sensitive.
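The double-check rule described above can be sketched as a simple dual-control policy: any transfer over a threshold stays blocked until it has been confirmed out of band, through a channel the requester's impersonator does not control. This is a minimal illustrative sketch; the class names, threshold and fields are assumptions, not a real payments API.

```python
# Hypothetical dual-control rule for wire transfers. A request above the
# threshold is held until verified out of band (e.g. a phone call to a
# number already on file, never one supplied in the request itself).
from dataclasses import dataclass

APPROVAL_THRESHOLD = 10_000  # illustrative limit requiring manual verification


@dataclass
class WireRequest:
    requester: str
    amount: float
    # Set True only after a callback to a contact from the company directory.
    verified_out_of_band: bool = False


def approve(request: WireRequest) -> bool:
    """Release the transfer only if it is small or independently verified."""
    if request.amount < APPROVAL_THRESHOLD:
        return True
    return request.verified_out_of_band
```

In this sketch, a large request from a convincing 'CEO' on a video call remains blocked until a human confirms it through an independent channel, which is exactly the step a digital twin cannot easily fake.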
It's ultimately about instilling a zero-trust ethos across the workforce, backed up by technology. This is where deepfake detection tools come in. Although still at an early stage, they show plenty of promise. Some tools are already capable of scanning live video for AI face-swapping content and alerting users to scams in real time.
As the potential for malicious digital twins increases, business leaders need to prepare employees for social engineering attempts, fraud and misinformation. Security risk is business risk, so maximizing awareness of these cyber threats among employees will be essential to bolstering defenses and staying ahead of cybercriminals.