Deepfakes are an emerging threat vector

Cybersecurity faces an emerging form of threat: deepfakes, in which images, video and audio are digitally doctored to scam people and organisations. Remember the video of a seemingly impaired House Speaker Nancy Pelosi that went viral on social media a few months ago? It was a doctored video, probably created to discredit her.

Why should the cybersecurity industry be concerned with deepfakes? Thus far, cybersecurity practitioners have focused on preventing unauthorised access to data. But the motivation behind attacks, and their anatomy, have changed. Cyber attackers are adding to their repertoire of stealing data and holding it to ransom: they are now modifying data in a big way, with the same goals of stealing information and money, damaging reputations and crippling governments with false information. Deepfakes have changed the mechanism for doing all of this.

The Dr Hugh Thompson Show, the closing programme at RSA Conference 2019 Asia Pacific & Japan, illustrated this with AI expert Dr Saurabh Shintre, a senior principal researcher with Symantec. He demonstrated how easy it is to create a deepfake video. Using AI software, he created a video that swapped the face and voice of Amazon CEO Jeff Bezos with those of Dr Hugh Thompson, chairman of the RSA Conference and chief technology officer of Symantec.

Dr Shintre had earlier taken five hours to do this, using AI programs that are freely available online in multiple languages, complete with user instruction manuals. As the technology improves, he said, the images will only become more realistic.

This is a scary thought. I could be framed if someone digitally swapped my face onto a criminal's in a video, linking me to a crime. It would not be easy to tell the difference; only experts experienced in separating truth from lies are equipped to do so.

Show guest Dr Vrizlynn Thing, a cybersecurity expert with Singapore firm ST Engineering, spoke of her experience with a group of police investigators. She showed them a mix of real and deepfake videos, asking them to tell one from the other. They were able to pick out the real ones 80 per cent of the time by looking for tell-tale signs such as bad lip synching, where the words did not quite match the mouth movements. But these were experienced police officers, skilled in interrogation and able to spot anomalies to discern fact from fiction.
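The lip-sync tell that the investigators relied on can be automated in a crude way. The sketch below is purely illustrative and is not drawn from the talk: it assumes a face tracker has already produced a per-frame mouth-openness signal and that the audio has been reduced to a loudness envelope at the same frame rate, then flags clips where the two signals do not rise and fall together. The 0.5 threshold and both signals are invented for the example; real detectors are far more sophisticated.

```python
# Hypothetical lip-sync check: in a genuine clip, mouth movement and
# speech loudness should be strongly correlated; a weak or negative
# correlation is a crude hint that the audio and video do not match.

def pearson(xs, ys):
    """Pearson correlation of two equal-length sequences."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

def lip_sync_suspicious(mouth_openness, audio_envelope, threshold=0.5):
    """Flag a clip when mouth movement tracks the audio poorly.

    threshold is an illustrative assumption, not a tuned value.
    """
    return pearson(mouth_openness, audio_envelope) < threshold

# Synthetic per-frame signals: the "real" mouth track follows the audio,
# the "fake" one moves against it (as in badly dubbed deepfake footage).
audio = [0.2, 0.9, 0.8, 0.1, 0.6, 0.9, 0.2]
real_mouth = [0.1, 0.8, 0.9, 0.2, 0.7, 0.8, 0.1]
fake_mouth = [0.9, 0.1, 0.2, 0.8, 0.1, 0.2, 0.9]

print(lip_sync_suspicious(real_mouth, audio))  # False: signals move together
print(lip_sync_suspicious(fake_mouth, audio))  # True: signals disagree
```

A single-number correlation like this is, of course, only a toy version of what trained investigators do by eye, but it shows why bad lip synching is such a reliable tell: the mismatch is measurable.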

To start dismantling deepfakes, Dr Thing suggested being aware of information sources and considering a deepfake's relevance and importance to organisations and executives.

I would suggest that cybersecurity practitioners also think like the hacker: what reaction does the deepfake aim to evoke, what are its possible consequences, and how did it surface? Answering these questions could lead to the source of the forgeries.

Deepfakes are another type of scam that spreads disinformation, posing a significant threat to organisations. Deepfake images, video and audio represent fresh attack vectors, and cybersecurity practitioners and the industry will have to move quickly to counter this new menace.

The Dr Hugh Thompson Show was certainly a thought-provoking session, delightfully concluding the seventh edition of the RSA Conference Asia Pacific & Japan.


*** This is a Security Bloggers Network syndicated blog from RSAConference Blogs RSS Feed authored by Grace Chng. Read the original post at: