The Kill Chain Model Works When Analysts See the Full Picture

Just about every cybersecurity professional is familiar with the cyber kill chain, a set of steps bad actors typically go through with the end goal of stealing valuable data. Reconnaissance. Weaponization. Delivery. Exploitation. Installation. Command and Control. Actions on Objective.
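Because the stages are strictly ordered, they can be modeled as a simple ordered structure. The sketch below is purely illustrative (the class, names and the "early" threshold are my own, not part of any product or the Lockheed Martin model):

```python
from enum import IntEnum

class KillChainStage(IntEnum):
    """The seven kill chain stages, in order of attack progression."""
    RECONNAISSANCE = 1
    WEAPONIZATION = 2
    DELIVERY = 3
    EXPLOITATION = 4
    INSTALLATION = 5
    COMMAND_AND_CONTROL = 6
    ACTIONS_ON_OBJECTIVE = 7

def detected_early(stage: KillChainStage) -> bool:
    """Hypothetical cutoff: detection before installation counts as 'early'."""
    return stage < KillChainStage.INSTALLATION
```

The ordering is the whole point of the model: catching an attacker at delivery (`detected_early(KillChainStage.DELIVERY)` is true) leaves far more room to respond than catching them at command and control.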

Sound familiar? The goal of cyber teams is to detect bad actors early as they move through the chain. Yet despite the fact that attackers' typical moves have been well understood since at least 2011, when Lockheed Martin published the kill chain model, bad actors continue to steal valuable data. For years, cyber experts have pushed back, explaining why the kill chain model can hurt more than help when it comes to protection. While I agree with some of their viewpoints, I think the kill chain could be a successful defensive framework if security technologies played more nicely together.

To detect attackers moving through the various phases of the kill chain, many companies use different tools that capture different data. These tools operate in silos, creating a fragmented view of an attacker's progress through the kill chain and preventing the security analyst from seeing the full picture. For example, a proxy detects a large malware download in the delivery phase, an endpoint protection tool detects activity in the exploitation and installation phases, the proxy again detects command-and-control traffic, and a data loss prevention tool flags the exfiltration stage.

Given the siloed approach of most organizations, security analysts are forced to look at the data coming from each individual tool and cannot connect the dots between them to identify a cohesive story. Another issue analysts face is that the tools lack context. For example, a bad actor steals a user's credentials, logs in to that user's machine and then tries logging in to another machine in the environment. An access management tool would spot the login on the first machine and then the attempted logins on the second, leading an analyst to conclude the user may have been compromised. However, it could just be another employee who forgot a password and is trying to log in. Without context, or data from the other tools (e.g., whether malware was planted on the initial machine), the analyst cannot tell the difference. Analysts see an incident only from the view of one toolset, and therefore cannot understand how it fits into the kill chain pattern.

More mature companies use a log aggregation tool such as a security information and event management (SIEM) system to connect the dots. However, an analyst must build rules so that the SIEM knows what to look for that may indicate a bad actor moving through the chain. In other words: if you see 100 login attempts on one machine, flag it; or, if a malware alert and a data loss prevention alert hit the same machine, treat that as a sign of an attacker in a kill chain. Analysts face significant pressure trying to figure out which series of events may indicate an attack. Some rely on experience, while others rely on gut instinct. Many waste time investigating alerts that turn out to be noise or false positives.
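The two hand-written rules above can be sketched as plain code. This is a minimal illustration of what an analyst encodes into a SIEM, not any real SIEM's rule syntax; the alert schema and thresholds are assumptions of mine:

```python
from collections import Counter

def correlate(alerts):
    """Apply two hand-written, SIEM-style correlation rules.

    Each alert is assumed to be a dict like:
        {"host": "srv-01", "type": "failed_login" | "malware" | "dlp"}
    Both the schema and the thresholds are illustrative.
    Returns the set of hosts flagged for investigation.
    """
    flagged = set()

    # Rule 1: 100 or more failed login attempts on one machine.
    login_counts = Counter(a["host"] for a in alerts if a["type"] == "failed_login")
    flagged.update(host for host, n in login_counts.items() if n >= 100)

    # Rule 2: a malware alert and a DLP alert on the same machine.
    malware_hosts = {a["host"] for a in alerts if a["type"] == "malware"}
    dlp_hosts = {a["host"] for a in alerts if a["type"] == "dlp"}
    flagged.update(malware_hosts & dlp_hosts)

    return flagged
```

The brittleness is easy to see even in this toy: every rule, threshold and alert-type combination must be anticipated and maintained by hand, which is exactly the pressure the paragraph above describes.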

As a user and entity behavior analytics (UEBA) vendor, I would love to say UEBA is the solution. However, while UEBA detects unusual activity, it doesn't connect the data coming from the disparate tools. Companies need a cyber-risk analytics layer on top of UEBA that brings together the UEBA output, the data coming from the multiple tools and context such as whether the asset at risk of compromise is of high value to the company. If the asset is of high value, the event is worth investigating; if compromising it would cause minimal impact to the company, the event can go on the back burner.
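One way to picture that prioritization layer is a scoring function that weights a UEBA anomaly signal by asset value. The sketch below is my own simplification under assumed inputs (normalized 0–1 scores); real products combine many more signals:

```python
def prioritize(events, back_burner_threshold=0.1):
    """Rank events by risk = anomaly score x asset value.

    Each event is assumed to be a dict like:
        {"id": "evt-1", "anomaly": 0.0-1.0, "asset_value": 0.0-1.0}
    where "anomaly" stands in for UEBA output and "asset_value" for
    business context. Both the schema and the formula are illustrative.
    Returns (investigate_now, back_burner), each sorted by risk.
    """
    scored = sorted(
        events,
        key=lambda e: e["anomaly"] * e["asset_value"],
        reverse=True,
    )
    investigate = [e for e in scored
                   if e["anomaly"] * e["asset_value"] >= back_burner_threshold]
    deferred = [e for e in scored
                if e["anomaly"] * e["asset_value"] < back_burner_threshold]
    return investigate, deferred
```

The point of the multiplication is that a highly anomalous event on a throwaway test server scores low, while a moderately anomalous event on a crown-jewel asset rises to the top of the queue.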

With cyber-risk analytics, UEBA and contextual information all in one place, analysts don't need to hunt for an attacker moving through the chain. They receive a prioritized list of events to investigate, ranked by how much risk they pose to the company, along with a body of evidence explaining why. With this approach, for example, an analyst can identify the compromised account of a user who fell victim to a phishing campaign that led to the download of a malware package that infected the network. The analyst can also see the account being used to prospect around the environment, evidenced by multiple login failures. All of that attack information is visible before the account is used to actually exfiltrate data out of the environment.

A SIEM is still an important piece of the puzzle, since it aggregates and stores data. With the endless amount of data generated by security tools, there needs to be a place to store it. But SIEMs need an extra level of intelligence, such as analytics.

It’s important to note that, ultimately, technology cannot do it all. We need a human to look at the body of evidence, decide whether a series of events is in line with the kill chain and take action. However, technology can be used to detect and prioritize events so analysts can spend their time investigating and remediating rather than hunting.

Humphrey Christian


Humphrey Christian is VP of Product Management at Bay Dynamics, a cyber risk analytics company that enables organizations to quantify the business impact of cyber risk from both insider and outsider attacks. Humphrey has over 16 years of experience designing and implementing data analytics solutions. Since joining Bay Dynamics in 2002, Humphrey has directed the product strategy, architecture, and implementation of the widely adopted IT Analytics and Risk Fabric products. He began his career in IT as a member of the Accenture consulting team. Humphrey holds a BS in Computer Systems Engineering from the University of Massachusetts.
