Making Sense of the Senseless AI in Cybersecurity

It is obvious to anyone who has attended a commercial security conference, been on the receiving end of countless security vendor pitches or tried to sift through which security product to add to their defenses: Artificial intelligence (AI) has permeated the marketing language in cyber. The promise of automation and superhuman intelligence is compelling, but most practitioners agree they have yet to realize any tangible benefits. Separating the hype from the value is a tall order and requires assessing what AI can and should do for security teams today.

The ongoing news coverage of companies with talented and diligent security organizations that experience breaches proves that adversaries are defeating existing security implementations. That isn’t likely to change anytime soon, given the motivation, resources and technology that attackers have at their disposal. Rather than putting existing approaches on steroids with AI, the best use of AI today is in helping security teams detect, understand and quickly respond to attacks to minimize risk and damages.

Issues with AI in Current Security

Simply applying AI to what teams do currently is likely to create more noise, further exacerbating the challenging task of defending corporate networks. There are three common problems with the current understanding of AI that can degrade an organization’s defenses if the technology isn’t implemented correctly and safely:

Lack of Strategy

Without a strategic framework to guide it, an AI implementation produces an increase in alerts that adds to already maxed-out workloads. It is easy to build models that detect new potential threats, indicators of compromise or anomalous behaviors. On the surface, it appears that these provide additional security; however, these just generate more false positives that distract overburdened security operations teams from seeing real threats.

Lots of Hype, Little Substance

Though many providers present their offerings as intelligence that detects sophisticated new patterns, most AI systems actually provide only a moderate extension of previous rule- and signature-based approaches that detect known attack methods. AI is only as powerful as the problem that designers ask it to solve, and oftentimes these implementations attempt a one-size-fits-all approach that doesn’t necessarily apply to a specific organization. The AI systems then don’t understand the networks they are deployed on, especially the specific business risks, and so allow adversaries to accomplish their goals without being detected. When pattern detection is static across time and networks, adversaries can profile the detections and easily update their tools and tactics to evade the defenses in place.

Larger Cognitive Load

The scoring systems of most AI implementations seem arbitrary because the system doesn’t explain what the scores mean. This leads to a breakdown in trust and understanding with the humans who need to consume and act on the results. When AI can’t support “sophisticated” detections with explanations that security analysts can understand, it adds to the analysts’ cognitive load rather than making them more efficient and effective. Presenting a detection in the context of a specific organization, with its business goals and risks, is key to helping responders prioritize and act on the attack.
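As an illustration of the interpretability point, a detection score is far more actionable when it ships with its top contributing factors. This is only a minimal sketch; the feature names and weights below are invented for the example and are not from any particular product:

```python
# Hedged sketch: instead of emitting a bare score, return the top
# contributing factors so an analyst can see *why* the score is high.
# Feature names and weights are illustrative assumptions.
weights = {
    "new_external_destination": 0.45,
    "off_hours_activity": 0.25,
    "rare_process_for_host": 0.30,
}

def explain_score(features):
    """Score a detection and rank the factors that drove it."""
    contributions = {f: weights[f] * v for f, v in features.items()}
    score = sum(contributions.values())
    top = sorted(contributions.items(), key=lambda kv: -kv[1])
    return score, top

score, reasons = explain_score(
    {"new_external_destination": 1.0,
     "off_hours_activity": 1.0,
     "rare_process_for_host": 0.0}
)
print(round(score, 2), reasons[0][0])  # 0.7 new_external_destination
```

Even a ranking this simple gives the analyst a starting point for triage, which a bare number never does.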

Making Sense with AI

These problems don’t negate the fact that AI and machine learning can be powerful tools in improving enterprise defenses. However, success requires a strategic approach that avoids the shortcomings of most of today’s implementations. There are three key areas that will help amplify the ability of security teams to work with AI, rather than adding to their problems:

Pay Attention to Adversary Objectives

The value of an AI implementation is defined by the goal it is given to achieve. An effective system requires goals aligned with business risks, which reduces the security team’s workload and focuses automated investigation on the full adversary objective. Rather than detecting isolated adversary activity such as the tool used or the tactic employed, systems that uncover core behaviors an adversary has difficulty avoiding will present security teams with a small number of true business risks to investigate. Effective solutions should have very low false-positive rates, generating fewer than 10 high-priority investigations per week (not the hundreds or thousands of events produced by current approaches).
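One way to picture this objective-level focus: correlate low-level detections per host and escalate only when several distinct behaviors chain into something resembling a full adversary campaign. The hosts, behavior labels and thresholds below are illustrative assumptions, not any vendor's actual logic:

```python
from collections import defaultdict

# Hypothetical detections: (host, behavior, confidence score).
# All names and numbers are invented for this sketch.
detections = [
    ("db-01", "lateral_movement", 0.4),
    ("db-01", "credential_use", 0.5),
    ("db-01", "data_staging", 0.6),
    ("web-02", "port_scan", 0.3),
]

# Group detections by host and escalate only when multiple distinct
# behaviors combine, approximating a full adversary objective rather
# than alerting on each individual event.
by_host = defaultdict(list)
for host, behavior, score in detections:
    by_host[host].append((behavior, score))

investigations = []
for host, events in by_host.items():
    behaviors = {b for b, _ in events}
    combined = sum(s for _, s in events)
    if len(behaviors) >= 3 and combined >= 1.0:
        investigations.append((host, sorted(behaviors), combined))

print(investigations)  # one investigation, for db-01
```

Four raw alerts collapse into a single high-priority investigation, which is the kind of reduction the paragraph above argues for.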

Know Your Adversary and Your Environment

When the security strategy shifts to focusing on the adversary’s core objectives, the adversary is forced to evolve and hide better in the environments it attacks. Cybercriminals traditionally have the advantage because they can profile defenses and avoid the detections in place. AI systems can regain the advantage by understanding the environment better than the adversary does. A system that understands the specifics of an environment can identify unusual behaviors with context the adversary could gain only by having complete access to the full (and constantly updated) internal data feeds the AI system learns from.
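A minimal sketch of that environmental advantage: a model that has learned a per-host baseline from internal telemetry can flag deviations an outsider could not anticipate. The traffic figures and z-score threshold below are invented for illustration:

```python
import statistics

# Illustrative sketch: learn a per-host baseline of daily outbound
# traffic from internal telemetry, then flag days that deviate sharply.
# The history and threshold are assumptions made up for this example.
baseline_mb = [120, 135, 110, 140, 125, 130, 118]  # MB/day, learned history
mean = statistics.mean(baseline_mb)
stdev = statistics.pstdev(baseline_mb)

def is_anomalous(observed_mb, z_threshold=3.0):
    """Flag behavior an adversary could not predict without access
    to the same internal history the model was trained on."""
    z = (observed_mb - mean) / stdev
    return z > z_threshold

print(is_anomalous(128))  # typical day -> False
print(is_anomalous(900))  # exfiltration-like spike -> True
```

The point is not the statistics, which are deliberately simple here, but that the baseline itself is private context the adversary cannot easily profile from outside.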

Enable Interpretable and Actionable Results

AI is supposed to make security analysts’ lives easier, not more difficult. These systems should provide results that automate typical analyst workloads and explain the results in a way that builds trust and, over time, accelerates the skill and experience of humans who use AI tools. The talent shortage security teams face today means that tools must help fill skills gaps with automation but then also provide interpretability and situational awareness. These will help grow the skills of security teams while also making day-to-day operations more efficient and impactful.

Effective Defense with AI

Security organizations can get the upper hand by implementing AI and machine learning in strategic ways that cut through the noise of too many alerts and too little information about results. But they aren’t a panacea. By understanding the specific problems that can arise when applying AI—and the most important things to focus on during an implementation—security teams will be empowered to defend their network with new tools that help make sense of the real risks, rather than being overwhelmed by senseless noise.

Dustin Hillard

Dustin Hillard, CTO of Versive, joined the company in 2012. He leads the research and development of automating security expertise with adaptive machine learning. The Versive Security Engine has received the SINET16 and ai100 awards for innovative use of machine learning in security. Dustin has published more than 30 papers about building systems that deliver business value via large-scale data processing and machine learning. His work incorporates approaches from many fields and covers supervised, semi-supervised, and unsupervised machine learning approaches.
