As AI and deepfake technology grow more sophisticated, the risk that deepfakes pose to firms and individuals becomes increasingly potent. The latest software and algorithms allow malicious hackers, scammers and cybercriminals to stay one step ahead of the authorities, making attacks ever harder to both prepare for and defend against.

Most readers are probably familiar, at least in broad terms, with traditional cyberattacks involving system hacking, viruses and ransomware. However, cybercrime took a vast leap forward in 2019, when the CEO of a UK-based energy firm fell victim to a scam built on a phone call using deepfake audio technology.

Believing he was speaking to his boss, the CEO transferred almost $250k on the instructions of an AI-generated deepfake voice. In the aftermath, some cybersecurity experts have been left wondering whether deepfake audio technology represents the next major security concern, while the wider world scrambles for ways to spot this looming threat.

Voice Cloning and AI Audio: A New Frontier For Cybercrime

The audio deepfake scam is, without a doubt, one of the more bizarre applications of deepfake technology. However, as we've seen, it is one that can clearly be applied successfully. So successfully and convincingly, in fact, that the CEO who fell victim to the attack stated on the record that he recognized his boss's voice by its 'slight German accent' and 'melodic lilt.' Furthermore, by all accounts, the cybercriminals' technology is becoming more difficult to detect by the month.

Sophisticated technology aside, the process behind the construction of audio deepfakes is a surprisingly simple one. Hackers have tweaked machine learning technology…
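To see why the underlying idea is so simple, consider how voice-cloning and speaker-verification systems reduce a voice to a numerical "fingerprint" and compare fingerprints for similarity. The toy sketch below illustrates only that core concept, using synthetic sine-wave "voices" and a crude banded frequency spectrum as the fingerprint; real systems use deep neural speaker embeddings, and all names and parameters here are illustrative assumptions, not any actual attack tool.

```python
# Toy illustration of voice fingerprinting: a "voice" is reduced to a
# coarse frequency-band profile, and two recordings are judged similar
# if their profiles are close. Purely a conceptual sketch with
# synthetic signals -- not a real voice-cloning or detection method.
import numpy as np

SAMPLE_RATE = 16_000  # samples per second, typical for speech audio

def synth_voice(pitch_hz: float, seconds: float = 1.0) -> np.ndarray:
    """Generate a synthetic 'voice': a pitch plus two harmonics."""
    t = np.linspace(0, seconds, int(SAMPLE_RATE * seconds), endpoint=False)
    signal = (np.sin(2 * np.pi * pitch_hz * t)
              + 0.5 * np.sin(2 * np.pi * 2 * pitch_hz * t)
              + 0.25 * np.sin(2 * np.pi * 3 * pitch_hz * t))
    return signal / np.max(np.abs(signal))

def fingerprint(audio: np.ndarray, n_bands: int = 64) -> np.ndarray:
    """Crude voice fingerprint: magnitude spectrum averaged into
    coarse bands, normalized to unit length."""
    spectrum = np.abs(np.fft.rfft(audio))
    bands = np.array([band.mean() for band in np.array_split(spectrum, n_bands)])
    return bands / np.linalg.norm(bands)

def similarity(a: np.ndarray, b: np.ndarray) -> float:
    """Cosine similarity between two fingerprints (1.0 = identical)."""
    return float(np.dot(a, b))

boss = synth_voice(pitch_hz=120)      # the "real" voice
clone = synth_voice(pitch_hz=121)     # a near-identical imitation
stranger = synth_voice(pitch_hz=200)  # an unrelated voice

print(similarity(fingerprint(boss), fingerprint(clone)))     # close to 1.0
print(similarity(fingerprint(boss), fingerprint(stranger)))  # much lower
```

The point of the sketch is that once a voice can be summarized numerically, the same machinery that verifies a speaker can, in a generative system, be inverted to synthesize speech that matches a target's fingerprint, which is why a few sample recordings of a victim's boss can be enough raw material for a convincing fake.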