
Highlights of AI Village DefCon China 2018

At the DefCon China 2018 conference, held on May 12, hackers and data scientists held lively discussions on cyberattacks that use and abuse machine learning, and on possible defenses.

It goes without saying that artificial intelligence is now actively used in most security technologies as well as in a wide range of attacks. Attack vectors have become more advanced and sophisticated. If you are curious, there is a remarkable series of posts related to AI and cybersecurity on Forbes, revealing how AI-driven systems can be hacked, detailing seven ways cybercriminals can use ML, and uncovering the truth about ML in defense.

Today, cyberattackers are less interested in traditional platforms and instead target self-driving cars, human-voice-imitation, and image-recognition systems. It stands to reason that the release of every new technology product brings new attack techniques and adds new concerns to an ever-growing list.

This review briefly describes the DEFCON presentations on security issues closely connected with AI and ML, aiming to bring readers up to speed on the latest uses and abuses of artificial intelligence in cybersecurity. The talks cover topics ranging from vulnerabilities in machine learning tools to reports on malicious ML deployment.

StuxNNet: Practical Live Memory Attacks on Machine Learning Systems

The behavior of ML systems depends less on specific machine opcodes and more on weight vectors and bias parameters. This makes a huge difference in terms of possible threat models.

Prior work mostly focused on generating adversarial inputs to exploit machine learning classifiers (e.g., fooling a face recognition system by wearing special sunglasses); there were no attempts to modify the model itself. The researchers demonstrated proof-of-concept malware that hacks neural networks on Windows 7, highlighting the different training paradigms and mechanisms cyberattackers could use. The speakers showed two videos, the first revealing the Naive Attack and the second the Trojan Attack. They displayed the devastating potential of a patched network and sparked a discussion on the systems-level security of AI.

The authors showed that, with selective retraining and backpropagation, it is easy to retrain networks, so an attacker looking to compromise an ML system could simply patch these weight and bias values in live memory, thereby taking control of the system with minimal risk of malfunction.
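
To make the threat model concrete, here is a minimal sketch (not the StuxNNet code) of how overwriting parameters in memory changes a model's behavior without touching a single opcode; the toy classifier, input, and patched bias are hypothetical.

```python
# A toy illustration of a parameter-patching attack: only weights/biases are
# modified, no code changes. Not the StuxNNet implementation.
import numpy as np

rng = np.random.default_rng(0)

# "Trained" single-layer classifier: 4 features -> 2 classes
W = rng.normal(size=(4, 2))
b = np.zeros(2)

def predict(x, W, b):
    logits = x @ W + b
    return int(np.argmax(logits))

x = np.array([1.0, -0.5, 0.3, 2.0])
print("before patch:", predict(x, W, b))

# An attacker with write access to process memory overwrites the bias so that
# class 1 always wins.
b_patched = b.copy()
b_patched[1] += 100.0
print("after patch:", predict(x, W, b_patched))
```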

Machine Learning Model Hardening for Fun and Profit

Developers increasingly use machine learning in systems that manage sensitive data, which makes those systems more attractive bait for cyber perpetrators. Even if your company deploys out-of-the-box applications, attackers could get through security and access this important organizational information. To improve the privacy and security of such systems, the speaker recommended techniques like differential privacy and secure multi-party computation, argued that exposing a vanilla ML model API with no model hardening is a poor idea, and focused on black-box access to neural network-based ML models.

Homomorphic encryption allows computations to be performed on encrypted information, so an adversary cannot read the data, yet the statistical structure is preserved. Fully homomorphic encryption schemes, however, are incredibly slow.
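
As a quick illustration of the idea (additive rather than fully homomorphic), here is a toy Paillier implementation with tiny hard-coded primes; a real system would use large keys and a vetted library.

```python
# Toy Paillier scheme: addition on plaintexts corresponds to multiplication
# on ciphertexts. The tiny primes below are NOT secure.
import math
import random

p, q = 293, 433
n = p * q
n2 = n * n
g = n + 1
lam = math.lcm(p - 1, q - 1)

def L(x):
    return (x - 1) // n

mu = pow(L(pow(g, lam, n2)), -1, n)

def encrypt(m):
    while True:
        r = random.randrange(1, n)
        if math.gcd(r, n) == 1:
            break
    return (pow(g, m, n2) * pow(r, n, n2)) % n2

def decrypt(c):
    return (L(pow(c, lam, n2)) * mu) % n

c1, c2 = encrypt(20), encrypt(22)
# Multiplying ciphertexts modulo n^2 adds the underlying plaintexts.
print(decrypt((c1 * c2) % n2))  # 42
```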

Secure multi-party computation means that multiple parties can jointly compute a function while keeping their inputs private. Although it is cheaper than homomorphic encryption, it requires more interaction between the parties.
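
Below is a minimal sketch of additive secret sharing, the building block behind many secure multi-party computation protocols; the three-party sum and the values are hypothetical, and a real protocol would add actual communication and protections against malicious parties.

```python
# Additive secret sharing: each input is split into random shares that sum to
# it; combining partial sums reveals only the total, never individual inputs.
import random

MOD = 2**61 - 1  # arithmetic is done modulo a large prime

def share(secret, n_parties=3):
    shares = [random.randrange(MOD) for _ in range(n_parties - 1)]
    shares.append((secret - sum(shares)) % MOD)
    return shares

# Three parties want the total without revealing their own numbers.
inputs = [120, 75, 310]
all_shares = [share(x) for x in inputs]

partial_sums = [sum(col) % MOD for col in zip(*all_shares)]
print(sum(partial_sums) % MOD)  # 505
```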

As for differential privacy, adding or removing an element from the data does not significantly change the output distribution. It is slow, but it works even in scenarios where the adversary has full knowledge of the training mechanism and access to the parameters. To extend your knowledge of differential privacy, read Dwork (2006) and Dwork and Roth (2015).
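
For intuition, here is a minimal sketch of the Laplace mechanism, the textbook way to answer a counting query with differential privacy; the dataset and epsilon are hypothetical.

```python
# Laplace mechanism for a counting query. A count has sensitivity 1 (adding or
# removing one record changes it by at most 1), so the noise scale is 1/epsilon.
import numpy as np

rng = np.random.default_rng()

def private_count(data, predicate, epsilon=0.5):
    true_count = sum(1 for x in data if predicate(x))
    noise = rng.laplace(loc=0.0, scale=1.0 / epsilon)
    return true_count + noise

ages = [23, 35, 41, 29, 52, 61, 38]
print(private_count(ages, lambda a: a >= 40))
```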

The presentation identified and illustrated the threat models these techniques address:

  • Model inversion and adversarial examples (Goodfellow et al., 2015; Papernot et al., 2016) – given a classification model/API that provides confidence values and predictions, it is possible to recover information about the training data encoded in the model (Fredrikson et al., 2015; Xu et al., 2016).
  • Memorization, where a known data format such as a credit card number allows extracting the information by running a search algorithm over the model predictions (Carlini et al., 2018).
  • Model theft – black-box access makes it possible to construct a new model that closely approximates the target (Tramèr et al., 2016); see the sketch after this list.
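
Here is a minimal sketch of model theft via black-box queries, greatly simplified from Tramèr et al. (2016); the "victim" model and data are hypothetical, and a real attack would go through a prediction API.

```python
# Model theft sketch: query a black-box model, then train a substitute on the
# query/response pairs and measure how closely it mimics the victim.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.tree import DecisionTreeClassifier

X, y = make_classification(n_samples=2000, n_features=10, random_state=0)
victim = DecisionTreeClassifier(max_depth=5, random_state=0).fit(X, y)

# The attacker only sends queries and observes predicted labels.
rng = np.random.default_rng(1)
queries = rng.normal(size=(5000, 10))
stolen_labels = victim.predict(queries)

clone = LogisticRegression(max_iter=1000).fit(queries, stolen_labels)

# Agreement between clone and victim on fresh inputs.
test = rng.normal(size=(1000, 10))
print((clone.predict(test) == victim.predict(test)).mean())
```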

The talk walked through the modern ML pipeline, evaluated the possible costs of these techniques in terms of accuracy and time complexity, and presented tips for hands-on model hardening:

  • Give users the bare minimum amount of information
  • Add some noise to output predictions (see the sketch after this list)
  • Restrict users from making too many prediction queries
  • Consider using an ensemble of models and return aggregate predictions
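
Here is a minimal sketch of the noise-adding tip: the API returns rounded, slightly noised probabilities instead of exact confidence scores, making inversion and extraction harder; the model, noise scale, and rounding precision are hypothetical choices.

```python
# Hardened prediction endpoint: add small noise to the probability vector and
# reduce its precision before returning it to the user.
import numpy as np
from sklearn.datasets import load_iris
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng()

def hardened_predict(model, x, noise_scale=0.02, decimals=2):
    probs = model.predict_proba(x.reshape(1, -1))[0]
    noisy = np.clip(probs + rng.normal(0.0, noise_scale, size=probs.shape), 0.0, None)
    noisy /= noisy.sum()                  # keep a valid probability distribution
    return np.round(noisy, decimals)

X, y = load_iris(return_X_y=True)
clf = LogisticRegression(max_iter=1000).fit(X, y)
print("exact:   ", np.round(clf.predict_proba(X[:1])[0], 6))
print("hardened:", hardened_predict(clf, X[0]))
```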

The conclusion, such as it was, is that differential privacy is the most reliable method of model hardening (Papernot et al., 2016; Papernot et al., 2018; Tramèr et al., 2018).

The author's general observation was that it is more practical to think about model hardening from the perspective of black-box access, although some techniques also work as white-box augmentations. Most attacks try to extract information held in the model even if the data is encrypted; they rely on the preservation of statistical relationships within the data, which most cryptographic techniques do not obfuscate.

AI vs AI: Undercover the Billions of Black Market in E-Commerce

With the development of AI, the black market in e-commerce has seen vast abuse of AI technologies. This evolving and lucrative black market attracts thousands of scalpers and hackers and costs companies like Alibaba and Amazon billions in reputational and financial losses.

This presentation provided real examples of how hackers target large e-commerce companies. Traditionally, cyberattacks involved a lot of manual work and little technology; now they have become AI based – take, as an example, an AI-based distributed CAPTCHA solver.

A complete industrial chain consists of an upstream (platforms that handle verification via code, image, voice, text, etc.), a midstream (various account-related services and exchange platforms, such as fake account registration and account pilfering), and a downstream (gaining profit through scalping, fraud, theft, blackmail, etc.). The presentation uncovered this industrial chain of the black market and its detailed social division of labor, as well as various advanced tools.

Notably, JD.com presented its approach to defending against such attacks. For instance, they detect scalpers by applying NLP to IM messages. Bot detection is another area where AI proved necessary. Moreover, they included biometric features such as mouse movement and keyboard dynamics.

The screenshot depicting mouse movement shows that it is possible to use vanilla CNNs to classify bot behavior.
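
To illustrate the idea, here is a minimal sketch of a vanilla CNN that classifies mouse-movement trajectories rendered as images into bot or human; the architecture, input size, and labels are hypothetical, not JD.com's model.

```python
# A small CNN over mouse trajectories rasterized as 64x64 grayscale images.
import torch
import torch.nn as nn

class MouseTrackCNN(nn.Module):
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),                      # 64x64 -> 32x32
            nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),                      # 32x32 -> 16x16
        )
        self.classifier = nn.Linear(32 * 16 * 16, 2)  # bot vs. human

    def forward(self, x):
        x = self.features(x)
        return self.classifier(x.flatten(1))

# A batch of 8 rasterized mouse trajectories.
dummy_batch = torch.randn(8, 1, 64, 64)
print(MouseTrackCNN()(dummy_batch).shape)  # torch.Size([8, 2])
```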

Workshop: Pwning Machine Learning Systems

The use of machine learning by cybercriminals is an emerging strategy. So how do hackers put machine learning algorithms to work?

The Pwning Machine Learning Systems workshop gave insight into the world of adversarial machine learning. It focused on practical examples that help attendees start pwning ML-powered malware classifiers, intrusion detectors, and WAFs.

Two types of attacks on machine learning and deep learning systems were covered: model poisoning and adversarial example generation. A Docker container is provided so you can play with the examples.
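
As an illustration of the adversarial-generation side, here is a minimal sketch of the fast gradient sign method (FGSM); the untrained toy model, input features, and epsilon are hypothetical stand-ins for the workshop's targets.

```python
# FGSM in a few lines: perturb the input in the direction that increases the
# loss with respect to the true label.
import torch
import torch.nn as nn

model = nn.Sequential(nn.Linear(20, 64), nn.ReLU(), nn.Linear(64, 2))
model.eval()

x = torch.randn(1, 20, requires_grad=True)   # a benign feature vector
true_label = torch.tensor([0])

loss = nn.functional.cross_entropy(model(x), true_label)
loss.backward()

epsilon = 0.1
x_adv = x + epsilon * x.grad.sign()

print("original:", model(x).argmax(dim=1).item(),
      "adversarial:", model(x_adv).argmax(dim=1).item())
```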

Machine Learning as a Tool for Societal Exploitation: A Summary on the Current and Future State of Affairs

You might notice the overlaps between cyberattack and defense. Like any tool, AI can serve both criminals and defenders at different ends of the spectrum, sometimes without much modification.

The talk started with a brief analysis of the current state of ML-related security, ranging from location mapping through ambient sounds to Quantum Black's sports-related work and various endpoint detection systems in different stages of development. It sparked a discussion on the adoption of machine learning to escalate cyber warfare and ended with the concept of 'fooling' ML software, providing a simple example of the effect this can have on human profiling. If the videos become available, they may be worth a look.

Scrutinizing the Weakness and Strength of AI Systems

Machine learning methods such as decision trees and K-nearest neighbors can provide end users with an explanation of individual decisions and even let them analyze a model's strengths and weaknesses. Models like deep neural networks (DNNs) are opaque, which is why they have not yet been widely adopted in cybersecurity, say, in defending against cyberattacks, even though as a rule they exhibit immense improvement in classification performance. It is an honor to see that ERPScan, which currently uses deep neural networks for threat detection, is one of the pioneers in this area.

This talk introduced techniques that can yield explanations for the individual decisions of a machine learning model and help one scrutinize a model's overall strengths and weaknesses. The speakers demonstrated how these techniques could be used to examine and patch the weaknesses of machine learning products.

Numerous current research papers leveraging deep learning for cybersecurity solutions were mentioned.

  • Binary Analysis (USENIX 15, USENIX 17, CCS 17)
  • Malware Classification (KDD 17)
  • Network Intrusion Detection (WINCOM 16)

The number of papers that apply deep learning to cybersecurity tasks is growing, especially in the field of malware analysis, and the list above could be extended with many more research papers.

Back to the presentation: it raised one of the most important discussions, the interpretability of deep learning models. For image recognition, an explanation means showing a group of important pixels; for sentiment analysis, showing keywords; for malware detection, showing which parts of the program make the DL model identify a given instruction as a function start.

There are two general approaches to interpretability: white box and black box. White-box approaches can be effective and give amazing results; however, they are adapted to common architectures such as image recognition. For security applications, white-box approaches are difficult to apply: hidden-layer representations cannot be understood the way image features can, and the hidden representations of binary code cannot be interpreted.

The existing black-box approaches are intuitive.

Once again, a deep learning model is highly non-linear. Simple linear approximation is not a good choice when a very precise answer is needed, as is the case in cybersecurity.

The researchers proposed their own approach: a Dirichlet process mixture regression model with multiple elastic nets. The mixture regression model provides a precise approximation of an arbitrary decision boundary, while the elastic nets enable the mixture model to deal with high-dimensional and highly correlated data. In addition, they select only the most valuable features.
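
Below is a greatly simplified sketch of the underlying idea: approximating a non-linear black-box decision function with several sparse, elastic-net-regularized linear surrogates fitted on regions of the input space. A fixed KMeans partition and scikit-learn models stand in for the Dirichlet process mixture, so this is not the authors' model.

```python
# Piecewise sparse-linear surrogate of a black-box classifier: partition the
# input space, fit one elastic-net-regularized linear model per region, and
# measure how faithfully the surrogate reproduces the black-box outputs.
import numpy as np
from sklearn.cluster import KMeans
from sklearn.datasets import make_moons
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import LogisticRegression

X, y = make_moons(n_samples=1000, noise=0.2, random_state=0)
black_box = RandomForestClassifier(random_state=0).fit(X, y)
bb_labels = black_box.predict(X)                 # only its outputs are used

regions = KMeans(n_clusters=4, n_init=10, random_state=0).fit_predict(X)

agreement = 0
for c in np.unique(regions):
    idx = regions == c
    if len(np.unique(bb_labels[idx])) < 2:       # single-class region
        agreement += idx.sum()
        continue
    local = LogisticRegression(penalty="elasticnet", solver="saga",
                               l1_ratio=0.5, max_iter=5000)
    local.fit(X[idx], bb_labels[idx])
    agreement += (local.predict(X[idx]) == bb_labels[idx]).sum()
    # local.coef_ shows which features drive the decision in this region

print("surrogate fidelity:", agreement / len(X))
```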

A fun outcome: the features that turn out to be central can actually be used to generate adversarial samples.

Overall, this talk was informative and offers food for further reflection.

Facilitating Postmortem Program Analysis with Deep Learning

Even though in-house software testing is an intensive process and developers do their job right, programs inevitably contain weaknesses that result in crashes. Software analysts have to carry out a long chain of time-consuming postmortem program analysis tasks in order to identify the root cause of a software crash. Since the effectiveness of postmortem program analysis depends on the ability to distinguish memory aliases, alias detection is named a key challenge.

The researchers introduced a recurrent neural network architecture to enhance memory alias analysis and concluded that their DEEPVSA network facilitates and improves postmortem program analysis with the help of deep learning.
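
To make the idea concrete, here is a minimal illustrative sketch (not the DEEPVSA architecture) of a bidirectional LSTM that tags each instruction token in a trace with a memory-region label such as stack, heap, or global; the vocabulary size, embedding size, and four region classes are hypothetical.

```python
# Sequence labeling over an instruction trace with a bidirectional LSTM:
# each token gets a memory-region class.
import torch
import torch.nn as nn

class RegionTagger(nn.Module):
    def __init__(self, vocab_size=256, embed_dim=64, hidden=128, n_regions=4):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, embed_dim)   # instruction tokens
        self.lstm = nn.LSTM(embed_dim, hidden, batch_first=True,
                            bidirectional=True)
        self.head = nn.Linear(2 * hidden, n_regions)       # per-token region label

    def forward(self, tokens):
        out, _ = self.lstm(self.embed(tokens))
        return self.head(out)

# A batch of 8 instruction sequences, 50 tokens each.
tokens = torch.randint(0, 256, (8, 50))
print(RegionTagger()(tokens).shape)   # torch.Size([8, 50, 4])
```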

Automatic debugging basically requires three actions:

  1. Track down the root cause of a software crash at the binary level without source code
  2. Analyze a crash dump and identify the execution path leading to the crash
  3. Reverse-execute the instruction trace (starting from the crash site)

The results are as follows:

  • DEEPVSA implements a novel RNN architecture customized for VSA;
  • DEEPVSA outperforms the off-the-shelf recurrent network architecture in terms of memory region identification;
  • DEEPVSA significantly improves the VSA with respect to its capability in analyzing memory alias;
  • DEEPVSA will enhance the accuracy and efficiency of postmortem program analysis.

Summary

The recent DEFCON China 2018 conference was a practical event dedicated to AI in cybersecurity, unlike many common events, which mostly focus on academic papers addressing adversarial issues.
