Deepfakes Pose New Security Challenges

Expect to hear a lot about deepfakes in 2020. Manipulating images is nothing new, but with advances in technology and the increasing use of biometrics as an authentication tool, deepfakes will impact cybersecurity efforts.

For example, cybercriminals are now perfecting deepfakes to impersonate people and steal money or anything else of value. The technology has improved to the point where it is difficult to tell a fraud from a friend.

According to McAfee researchers, deepfakes will make reliable facial recognition harder to achieve, even as facial recognition software is increasingly used to unlock smartphones and serve as an airport identification alternative, to name a few use cases.

“As technologies are adopted over the coming years, a very viable threat vector will emerge, and we predict adversaries will begin to generate deepfakes to bypass facial recognition,” Steve Povolny, head of McAfee Advanced Threat Research, wrote in a McAfee blog post. This is because “enhanced computers can rapidly process numerous biometrics of a face, and mathematically build or classify human features, among many other applications.”

To do this, scammers turn to an analytics technology known as generative adversarial networks (GANs) to create fake but extremely realistic images, text and video, making it more and more difficult for those charged with security to tell the real thing from a deepfake.
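To see what "adversarial" means here, below is a minimal, illustrative sketch of a GAN training loop in Python using PyTorch. The model names, layer sizes and dimensions are hypothetical placeholders invented for this example, not code from any tool referenced in this article. A generator learns to produce fakes while a discriminator learns to catch them, and each improves by competing against the other.

```python
# Minimal, illustrative GAN training loop (PyTorch). All names and sizes
# here are hypothetical placeholders, not from any tool in this article.
import torch
import torch.nn as nn

latent_dim, data_dim = 64, 784  # e.g., flattened 28x28 grayscale images

# Generator: maps random noise to a synthetic sample.
G = nn.Sequential(
    nn.Linear(latent_dim, 256), nn.ReLU(),
    nn.Linear(256, data_dim), nn.Tanh(),
)

# Discriminator: scores how "real" a sample looks.
D = nn.Sequential(
    nn.Linear(data_dim, 256), nn.LeakyReLU(0.2),
    nn.Linear(256, 1),  # raw logit; BCEWithLogitsLoss applies the sigmoid
)

opt_g = torch.optim.Adam(G.parameters(), lr=2e-4)
opt_d = torch.optim.Adam(D.parameters(), lr=2e-4)
loss_fn = nn.BCEWithLogitsLoss()

def train_step(real_batch: torch.Tensor) -> None:
    batch = real_batch.size(0)
    ones, zeros = torch.ones(batch, 1), torch.zeros(batch, 1)

    # 1) Train the discriminator to separate real from generated samples.
    noise = torch.randn(batch, latent_dim)
    fake = G(noise).detach()  # detach: don't update G during this step
    d_loss = loss_fn(D(real_batch), ones) + loss_fn(D(fake), zeros)
    opt_d.zero_grad(); d_loss.backward(); opt_d.step()

    # 2) Train the generator to fool the discriminator.
    noise = torch.randn(batch, latent_dim)
    g_loss = loss_fn(D(G(noise)), ones)  # generator wants "real" verdicts
    opt_g.zero_grad(); g_loss.backward(); opt_g.step()
```

The security concern follows directly from this loop: the better the discriminator gets at spotting fakes, the better the generator is forced to become at producing them.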

Facial Recognition Already Has Flaws

Despite its growing adoption, facial recognition comes saddled with security problems. The Washington Post reported on a recently released federal study showing these systems exhibit biases against people of color and across genders and age groups. “The National Institute of Standards and Technology, the federal laboratory known as NIST that develops standards for new technology, found ‘empirical evidence’ that most of the facial-recognition algorithms exhibit ‘demographic differentials’ that can worsen their accuracy based on a person’s age, gender or race,” the article reported.

Now add deepfakes to the problems that already exist with facial recognition, and criminals will be able to manipulate the technology to evade the law. Deepfakes will complicate law enforcement at every level, from police work on the street to nation-state election interference. Those tasked with security will be asked to tell the real from the fake; deepfakes will make that even more difficult.

So Easy, Even a Novice Can Do It

Creating deepfakes will still require some computer savviness, but it is becoming a tool available to novices, one that could raise the stakes for insider threats as well as outside cybercrime.

As an experiment for an Ars Technica article, Timothy Lee did a deep dive into how deepfake software works. The process was time-consuming: it took him two weeks to create a video that replaced Mark Zuckerberg with a character from Star Trek, and it required a lot of computing power, but it wasn't expensive (a little more than $500). And he developed skills that will make him more proficient if he makes another video.

Now consider if an employee or a contractor wanted to deploy their own deepfake video as a malicious attack against the company or a co-worker.

“Deepfake video or text can be weaponized to enhance information warfare. Freely available video of public comments can be used to train a machine-learning model that can develop a deepfake video depicting one person’s words coming out of another’s mouth,” Steve Grobman, McAfee’s chief technology officer, wrote. “Attackers can now create automated, targeted content to increase the probability that an individual or groups fall for a campaign. In this way, AI and machine learning can be combined to create massive chaos.”

Close But Not Quite There

At McAfee’s MPower conference in October, researchers discussed their pre-emptive strike against AI-generated deepfakes and image manipulation. While deepfake-related attacks are imminent, they aren’t happening yet, at least not at a large scale; right now we’re mostly seeing examples and experiments of what could happen. So, the researchers said, this is one cybersecurity attack that security teams can address before the fact rather than in reaction to it, and hopefully the tools will be in place sooner rather than later.

But the attackers and the technology aren’t quite there yet, either. “While an attacker can use deepfake techniques to convincingly emulate the likeness of an individual, it is still difficult to digitally impersonate one’s voice without fairly obvious imperfections,” said Robert Capps, vice president of market innovation for NuData Security, in an email comment.

“Deepfake audio or video cannot currently be rendered in real-time without an attacker having a large volume of computing resources and a lot of high-quality audio and video source material to train machine learning algorithms,” Capps continued. “While deepfakes can be convincing to other humans, they are unable to pass physical or passive biometric verification, so coupling strong liveness detection, along with the collection of passive and physical biometric signals to verify a user’s identity, largely mitigates the current risks presented in banking transactions.”
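As a rough illustration of the layered verification Capps describes, here is a hedged Python sketch. The signal names and thresholds are hypothetical, invented for this example; real systems tune such parameters against fraud data. The point is that a deepfake that clears a face-match threshold should still fail independent liveness and passive behavioral checks.

```python
# Hypothetical sketch of layered identity verification: a deepfake might
# fool one signal, but must clear liveness AND behavioral checks as well.
from dataclasses import dataclass

@dataclass
class VerificationSignals:
    face_match_score: float  # similarity to enrolled template, 0..1
    liveness_score: float    # e.g., blink/depth/texture checks, 0..1
    behavior_score: float    # passive signals: typing cadence, device handling

# Thresholds are illustrative placeholders, not production values.
FACE_T, LIVE_T, BEHAVIOR_T = 0.90, 0.95, 0.80

def verify_user(s: VerificationSignals) -> bool:
    # All layers must pass; a replayed or synthesized face that clears the
    # match threshold should still fail liveness or behavioral checks.
    return (s.face_match_score >= FACE_T
            and s.liveness_score >= LIVE_T
            and s.behavior_score >= BEHAVIOR_T)

# Example: a convincing deepfake with no live subject behind the camera.
print(verify_user(VerificationSignals(0.97, 0.20, 0.35)))  # False
```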

Security challenges with deepfakes are out there, but hopefully, security professionals will have the tools in place to address them before serious damage is done.

Sue Poremba

Sue Poremba is a freelance writer based in central Pennsylvania. She's been writing about cybersecurity and technology trends since 2008.