Gearing Up Security for the Deepfake Era

Back in 2019, which now feels like a hundred years ago, Moody’s published a research announcement highlighting the threat that deepfakes pose to organizations and individuals. Though the report was largely ignored at the time, many of its predictions are starting to look eerily prescient.

Specifically, the report pointed out that deepfakes can be used for far more than faking videos of politicians. They can also be used to damage a company’s credibility, and therefore its financial health, to say nothing of using fabricated video to fool “smart” physical security systems.

This is not a new concern, of course. Deepfakes appeared in our cybersecurity predictions for 2021, and in just the last few months we’ve seen that detecting them is placing a significant burden on the healthcare industry. Despite this, many security engineers remain unprepared for a future in which deepfakes are just another hacker tool. In this article, we’ll look at why that is, and at what you can do to protect your organization.

Deepfakes Go Pro

Let’s tackle one myth before we go any further: deepfakes are a real threat to consumer and enterprise cybersecurity, not just a novelty. Up until now, the technology has mainly been used for “fun,” and the result is that even experienced security analysts have often overlooked the real threat hidden within it.

Another reason for this underestimation of the danger of deepfake videos is that network engineers are so used to looking for complex, network-focused threats that they can overlook the most obvious ways of compromising systems. The majority of cyber attacks don’t start with exotic self-encrypting malware; they start with a successful phishing attempt. And deepfakes take phishing to the next level.

This danger was, in fact, foreseen in the Moody’s report I’ve already mentioned above. “Imagine a fake but realistic-looking video of a CEO making racist comments or bragging about corrupt acts,” wrote Leroy Terrelonge, AVP-Cyber Risk Analyst at Moody’s. “Advances in artificial intelligence will make it easier to create such deepfakes and harder to debunk them. Disinformation attacks could be a severe credit negative for victim companies.”

In other words, a corporation’s real-world assets could be put at risk by nothing more than manipulated pixels. A deepfake attack left unchallenged, or challenged poorly, could result in brand damage, a sales slump, falling stock prices, and more.

It goes without saying that this kind of threat – using a deepfake to undermine market confidence rather than to hack into a system – is not one that security engineers are used to dealing with. That’s why, to combat this danger, organizations must take a more holistic view of security and recognize just how broad the deepfake threat is.

Full-spectrum Threats

The range of what can be achieved with deepfake videos is, in fact, only beginning to be understood by academics and analysts. In a report released just last year, the Brookings Institution surveyed the risks associated with deepfakes, and its conclusions were pretty apocalyptic. Deepfakes could, the authors wrote, be used in “distorting democratic discourse; manipulating elections; eroding trust in institutions; weakening journalism; exacerbating social divisions; undermining public safety; and inflicting hard-to-repair damage on the reputation of prominent individuals, including elected officials and candidates for office.”

Looked at from a corporate perspective, the range of risks is similarly broad. At the most basic level, deepfakes can be used to trick the facial recognition software that many consumers and businesses rely on for access control, which alone puts them among the top threats to cloud security. And when you consider how many smart home gadgets are in the average home, and how many of those homes rely on facial-recognition security cameras, it’s clear that deepfakes also rank among the top threats to consumer data in general.

At the broadest level, the impact of deepfakes may go even further. Unless we find an effective way of combating these videos, they threaten to undermine the trust that underpins communication of all kinds – not just between politicians and the electorate, but between CEOs and shareholders, and between brands and the general public. Without a way to reliably detect deepfakes and limit their impact, many corporations face an existential threat.

Preparing for the Future

At the moment, however, the tools available for detecting deepfakes are fairly limited. Though experts have argued that these tools should be a focus of research efforts, and despite some promising advances in using AI to reverse-engineer the videos, most deepfake videos remain dangerously convincing.

This means that, for most organizations, preparing for the advent of the “deepfake era” will involve integrating cybersecurity processes with broader management and PR teams. In the parlance of our times, this may eventually mean the creation of DevSecPR teams, who can:

  • Train CEOs and other executives on the dangers posed by deepfakes and provide them with strategies for responding to them
  • Identify when and where your organization is mentioned in the media, and plan mitigation of damaging incidents in real time
  • Deploy what detection tools are available so that the false information a deepfake spreads can be quickly identified, rebutted, and contained (a minimal screening sketch follows this list)
  • Develop incident response plans for deepfake attacks that identify the key stakeholders and spell out how and when information about an incident will be released
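
To make the detection item above a little more concrete, here is a minimal sketch of what an automated screening step might look like. It is illustrative only and assumes you already have a frame-level deepfake classifier exported to TorchScript; the file name deepfake_classifier.pt, the 224×224 input size, the sampling rate, and the flagging threshold are all hypothetical placeholders rather than recommendations of any particular tool.

    # A minimal screening sketch, assuming a hypothetical frame-level deepfake
    # classifier exported to TorchScript. Higher scores mean "more likely fake."
    import cv2
    import torch

    MODEL_PATH = "deepfake_classifier.pt"   # hypothetical checkpoint name
    SAMPLE_EVERY_N_FRAMES = 30              # roughly one frame per second at 30 fps
    FLAG_THRESHOLD = 0.7                    # illustrative cut-off, not a recommendation

    model = torch.jit.load(MODEL_PATH).eval()

    def score_video(path: str) -> float:
        """Return the mean per-frame manipulation score for a video file."""
        cap = cv2.VideoCapture(path)
        scores, frame_idx = [], 0
        while True:
            ok, frame = cap.read()
            if not ok:
                break
            if frame_idx % SAMPLE_EVERY_N_FRAMES == 0:
                # OpenCV yields BGR frames; convert and resize to the model's assumed input.
                rgb = cv2.cvtColor(cv2.resize(frame, (224, 224)), cv2.COLOR_BGR2RGB)
                tensor = torch.from_numpy(rgb).permute(2, 0, 1).float().unsqueeze(0) / 255.0
                with torch.no_grad():
                    scores.append(torch.sigmoid(model(tensor)).item())
            frame_idx += 1
        cap.release()
        return sum(scores) / len(scores) if scores else 0.0

    if __name__ == "__main__":
        score = score_video("suspect_clip.mp4")
        if score > FLAG_THRESHOLD:
            print(f"Escalate to incident response: mean manipulation score {score:.2f}")
        else:
            print(f"No automatic flag raised: mean manipulation score {score:.2f}")

Even a rough screening step like this only narrows the funnel: flagged clips still need human review, and the score should feed the incident response and communications plan above rather than trigger any automatic public statement.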

It’s only by working in this holistic way, in fact, that organizations can hope to mitigate the growing threat of deepfakes, because it’s quickly becoming apparent that this risk extends far beyond the simple phishing attack.

The Bottom Line

It may be, of course, that some of these predictions do not come to pass, and that we never enter an era in which deepfakes are commonplace. Perhaps a foolproof way of detecting these videos will emerge and neutralize the threat. Until then, however, organizations need to take it seriously – not just by improving data security, but also by preparing to manage the social and financial fallout from deepfake videos.

Note: This blog article was written by a guest contributor for the purpose of offering a wider variety of content for our readers. The opinions expressed in this guest author article are solely those of the contributor and do not necessarily reflect those of GlobalSign.
