People will believe what they want to believe. Confirmation bias is the tendency, once we have formed an opinion, to cherry-pick information that confirms it. We stop perceiving situations objectively, which leads to misjudgments and the spread of false information.
Deepfake takes to confirmation bias like a duck to water.
Deepfake poses a serious threat to cybersecurity. But before we dig into that, let’s understand what deepfake is.
What is Deepfake?
The term “deepfake” is a blend of “deep learning” and “fake”. It refers to AI-based technology used to create fake videos and audio that look and sound real.
Although academic interest in deepfakes dates back to 1997, the technology entered the public mainstream in 2017, when a group of Reddit users began using AI to swap celebrities’ faces onto movie characters.
What makes it tricky is that just about anybody with a computer and an internet connection can create deepfake media. This is done using a machine learning architecture called a generative adversarial network (GAN), in which one model generates forgeries while a second model flags their flaws, and the two iterate until the flaws become undetectable. The entire creation process can be automated at scale.
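To make the adversarial idea concrete, here is a minimal, illustrative sketch of a GAN training loop. It is a toy: instead of images, the generator learns to mimic samples from a simple 1-D normal distribution, and both models are tiny hand-written functions with manual gradients. Every name, formula choice, and hyperparameter here is an assumption for illustration, not part of any real deepfake toolkit.

```python
import numpy as np

# Toy 1-D GAN: the "real data" are samples from N(4, 1.25).
# Generator G(z) = a*z + b tries to mimic them; discriminator
# D(x) = sigmoid(w*x + c) tries to tell real from fake.
rng = np.random.default_rng(0)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

a, b = 1.0, 0.0    # generator parameters (starts far from the data)
w, c = 0.1, 0.0    # discriminator parameters
lr = 0.05          # learning rate (illustrative choice)

for step in range(2000):
    # --- Discriminator step: push D(real) toward 1, D(fake) toward 0 ---
    x_real = rng.normal(4.0, 1.25, size=32)
    z = rng.normal(size=32)
    x_fake = a * z + b
    d_real = sigmoid(w * x_real + c)
    d_fake = sigmoid(w * x_fake + c)
    # Gradients of binary cross-entropy over the batch
    w -= lr * np.mean((d_real - 1.0) * x_real + d_fake * x_fake)
    c -= lr * np.mean((d_real - 1.0) + d_fake)

    # --- Generator step: push D(fake) toward 1 (non-saturating loss) ---
    z = rng.normal(size=32)
    x_fake = a * z + b
    d_fake = sigmoid(w * x_fake + c)
    gx = -(1.0 - d_fake) * w   # dL/dx_fake for loss -log D(x_fake)
    a -= lr * np.mean(gx * z)  # chain rule: dx_fake/da = z
    b -= lr * np.mean(gx)      # dx_fake/db = 1

fakes = a * rng.normal(size=1000) + b
print(f"generator output mean ~ {fakes.mean():.2f} (real data mean is 4.0)")
```

Each round, the discriminator gets better at spotting forgeries, which in turn gives the generator a sharper gradient for hiding them; after enough rounds the generator’s output drifts toward the real distribution. Real deepfake systems apply the same loop to deep convolutional networks and face imagery rather than a one-parameter line.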
The ease and accessibility of deepfake have opened a new realm of social engineering attacks for which your current cybersecurity system may not be prepared.
Dangers of Deepfake for Organizations
Cybercriminals love deepfake because they don’t have to go through the grind of targeting your systems. Everything happens on ordinary information channels like social media and email. In short, one doesn’t need ‘special’ hacking skills to launch these attacks. And therein lies the danger.
Propaganda Impacts Financial Health
It took a single fake tweet, claiming that explosions at the White House had injured then-US president Barack Obama, to wipe out more than US$130 billion in stock value in a matter of minutes.
Hackers can make your business financially vulnerable without ever touching your balance sheet. Misinformation spread in the market can drive share prices up or down, depending on the criminals’ agenda.
ID Theft 2.0
As if the dark web hadn’t done enough for ID thieves, deepfake is now taking identity theft to the next level. Combined with social media, deepfake makes impersonating anyone easy, really easy.
Say you’re an IT manager. Hackers scour your social media accounts for audio and video clips, then craft deepfake media of you to trick your subordinates into granting access to sensitive databases. The result: a data breach at catastrophic scale.
A New Form of Ransomware
Attackers can fabricate extremely damaging video and audio clips. By threatening to release them online, hackers can extort money, data, or both. It’s no surprise deepfake ransomware is among the most feared attack vectors.
Protect Data Against Deepfake
While combating deepfake technology is challenging, it is possible to keep your data secure. You need to look at two things: the human and the technology.
Deepfake relies on the oldest hacking principle: human error, more specifically, errors of judgment. The human aspect involves training employees to distinguish real media from fake while protecting their identities on the internet.
The technology aspect means equipping yourself with the best cybersecurity solutions. Automated tools for spotting deepfake media do exist, but unfortunately the technology does not yet scale. You need a solution that provides comprehensive data protection.
Say hello to Spanning Backup! Recover lost data and ensure business continuity with ease. Spanning comes with robust 256-bit AES object-level encryption, intrusion detection and compartmentalized access, discouraging deepfake criminals from gaining access to your data.
*** This is a Security Bloggers Network syndicated blog from Spanning authored by Shyam Oza. Read the original post at: https://spanning.com/blog/deepfake-ai-endangering-your-cybersecurity/