How Criminals Can Exploit AI

Because tools, source code, and tutorials for developing artificial intelligence (AI) are widely available in the public domain, AIs built for attack purposes may well become even more prevalent than those built for defense.

“Hackers are just as sophisticated as the communities that develop the capability to defend themselves against hackers. They are using the same techniques, such as intelligent phishing, analyzing the behavior of potential targets to determine what type of attack to use, and ‘smart malware’ that knows when it is being watched so it can hide,” said Mark Testoni, president and CEO of enterprise security company SAP NS2.

Source: “How AI is stopping criminal hacking in real time” by John Brandon

This article reviews some of the common attack vectors through which cybercriminals could apply AI technology.

Machine learning poisoning is a way for criminals to undermine the effectiveness of AI. They study how the machine learning (ML) process works, which varies from case to case, and once a vulnerability is spotted, they try to confuse the underlying models.

Poisoning a machine learning engine is not that difficult if you can poison the data pool from which the algorithm is learning. Dr. Alissa Johnson, CISO for Xerox and the former Deputy CIO for the White House, knows the simplest defense against such ML poisoning: “AI output can be trusted if the AI data source is trusted,” she told SecurityWeek.
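To make the idea concrete, here is a minimal, hypothetical sketch (not from the article) of how injecting mislabeled samples into a training pool can flip the decision of a simple nearest-centroid classifier. The feature values, class names, and classifier are all illustrative assumptions.

```python
# Toy data-poisoning sketch: an attacker who can write into the
# training pool drags the "benign" class centroid toward the
# malicious cluster, so a malicious sample is misclassified.

def centroid(points):
    """Mean of a list of 1-D feature values."""
    return sum(points) / len(points)

def classify(x, benign, malicious):
    """Assign x to whichever class centroid it is closer to."""
    if abs(x - centroid(malicious)) < abs(x - centroid(benign)):
        return "malicious"
    return "benign"

# Clean training pool: benign samples cluster near 0, malicious near 10.
benign = [0.5, 1.0, 1.5]
malicious = [9.0, 10.0, 11.0]

sample = 6.0
print(classify(sample, benign, malicious))            # → malicious

# Poisoning: inject points from the malicious region, mislabeled as
# benign, into the pool the model learns from.
benign_poisoned = benign + [9.0, 9.5, 10.0]
print(classify(sample, benign_poisoned, malicious))   # → benign
```

Real poisoning attacks target far more complex models, but the mechanism is the same: corrupt the data source and the learned decision boundary moves, which is why Johnson's point about trusting the data source matters.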

Convolutional Neural Networks (ConvNets or CNNs) are a class of artificial neural networks that have proven their effectiveness in areas such as image recognition and classification. Autonomous vehicles also utilize this technology to recognize and interpret street signs.
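The core operation a CNN layer performs can be sketched in a few lines of pure Python. This is an illustrative toy (the image, kernel, and helper name are assumptions, not from the article): a sliding window computes weighted sums over image patches, and a suitable kernel responds strongly where a vertical edge appears.

```python
def conv2d(image, kernel):
    """Valid 2-D convolution (cross-correlation, as in deep learning
    frameworks): no padding, stride 1, on nested lists."""
    kh, kw = len(kernel), len(kernel[0])
    out_h = len(image) - kh + 1
    out_w = len(image[0]) - kw + 1
    out = []
    for i in range(out_h):
        row = []
        for j in range(out_w):
            # Weighted sum of the kh x kw patch under the kernel.
            row.append(sum(image[i + di][j + dj] * kernel[di][dj]
                           for di in range(kh) for dj in range(kw)))
        out.append(row)
    return out

# Tiny image: dark on the left, bright on the right.
image = [
    [0, 0, 0, 1, 1],
    [0, 0, 0, 1, 1],
    [0, 0, 0, 1, 1],
]
# Vertical-edge kernel: responds where brightness increases left-to-right.
kernel = [
    [-1, 0, 1],
    [-1, 0, 1],
    [-1, 0, 1],
]
print(conv2d(image, kernel))  # → [[0, 3, 3]]: peak response at the edge
```

A trained CNN stacks many such filters, with the kernel weights learned from data rather than hand-written, which is why training them demands the considerable resources discussed below.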

To work properly, however, CNNs require considerable training resources, and they tend to be trained in the cloud or partially outsourced.

*** This is a Security Bloggers Network syndicated blog from InfoSec Resources authored by Dimitar Kostadinov. Read the original post at: