
The Detection Curveball

  • We analysed malware that was getting through our customers’ detection layers, and how long it takes the detection industry to catch up with fresh samples.
  • Most malware that reaches customer endpoints is fresh: it is not yet generally known.
  • You cannot rely on detection to protect your IP, as it takes detect-to-protect vendors quite some time to update their signatures / rules and detect samples that have made it into the wild.

Is Detection Dead?

We often talk about how detect-to-protect is dead. At Bromium, what we do is predicated on this truth. But I wanted to look at how malware gets through our customers' other lines of defence, and how vendors manage to claim such high detection rates. I analysed our in-house corpus of customer malware samples and cross-referenced them with historical scan results to see what various detection engines made of those samples when they were fresh.

How We Conducted the Analysis

I chose 100 unique, recent, file-based malware samples, each individually analysed and confirmed as malicious by our internal team. It turns out that 19% of these samples were still unknown in the wild after a month: either they were unique polymorphic samples, or they simply did not appear often enough to be picked up in threat-hunting operations. For the purposes of this analysis, we look at the remaining 81 samples that did become generally known.
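
As a rough illustration of that filtering step (not our actual tooling; the file name and record fields here are hypothetical), it boils down to keeping only the samples that eventually showed up in public scan data:

```python
import json

# Hypothetical export of our 100 internally confirmed malicious samples.
with open("confirmed_samples.json") as f:
    samples = json.load(f)

# A sample "became generally known" if it eventually appeared in public
# scan data; samples with no first_public_scan stayed unknown in the wild.
known = [s for s in samples if s.get("first_public_scan") is not None]

print(f"analysing {len(known)} of {len(samples)} samples "
      f"({100 * (len(samples) - len(known)) / len(samples):.0f}% never surfaced)")
```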

We have access to historical scan results from a variety of vendors. When a malicious sample is fresh, only 43.4% of these products (on average) identify it as malicious at the point it is first scanned. This is with the latest DAT files and access to the full malware sample for scanning, not just its hash.
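
Continuing the sketch above (the record shape is still hypothetical: each sample carries a scans list of timestamped rescans, each with per-engine verdicts), the 43.4% figure is simply the mean fraction of engines flagging each sample at its very first scan:

```python
# Fraction of engines that flagged a sample at its first scan;
# each scan record carries verdicts: {engine name: True/False}.
def first_scan_rate(sample):
    verdicts = list(sample["scans"][0]["verdicts"].values())
    return sum(verdicts) / len(verdicts)

rates = [first_scan_rate(s) for s in known]
print(f"mean first-scan detection rate: {100 * sum(rates) / len(rates):.1f}%")
```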

See the following graph. It is computed over the 81 samples for which we have data, with time=0 defined as the moment a sample was first actually scanned by the basket of detection tools. I have also removed the poorest detection tools: the line shows the average of the 15 best engines, ranked by overall number of correct positive results. Other ways of selecting which tools to include produce broadly the same graph, and this felt the fairest.

[Figure: Malware detection with anti-virus (AV) software]
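
For the curious, the curve above was assembled roughly like this (again a sketch over the same hypothetical record shape, with timestamps in epoch seconds): rank the engines by total correct positives, keep the best 15, then bucket every rescan by the time elapsed since that sample's first scan:

```python
from collections import Counter, defaultdict

# Rank engines by total correct positives across every rescan; keep the top 15.
totals = Counter()
for s in known:
    for scan in s["scans"]:
        for engine, hit in scan["verdicts"].items():
            totals[engine] += hit
top15 = {engine for engine, _ in totals.most_common(15)}

# Average top-15 detection rate, bucketed by days elapsed since each
# sample's first scan (t=0 on the x-axis of the graph above).
buckets = defaultdict(list)
for s in known:
    t0 = s["scans"][0]["timestamp"]
    for scan in s["scans"]:
        day = int((scan["timestamp"] - t0) // 86400)
        hits = [hit for e, hit in scan["verdicts"].items() if e in top15]
        buckets[day].append(sum(hits) / len(hits))

curve = {day: sum(v) / len(v) for day, v in sorted(buckets.items())}
```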

Detect-to-protect vendors sometimes tout their alleged protective abilities by running their products against a static set of non-fresh samples and showing lots of things whizzing past that they magically detect. You can bet these sample sets are carefully curated, but time plays a huge part too: by the time you see the demo, the samples have become generally known and the engines have been updated.

Since we chose from a set of samples found by Bromium customers, all the samples were by definition detected by Bromium using our behavioural detection engine (which we know is imperfect too). The samples in this set all ran safely, protected by our strong application isolation.

In the field, where companies are actually being attacked, people don't receive ancient samples; malicious actors are continually updating their wares. We found empirically that hunting for Emotet samples gave us malware first scanned less than two hours previously, and not yet detected by most engines.

To my eyes, this graph already paints a gloomy picture, but we can also populate it with aggregate data about when the malware was actually opened by our customers. I've re-drawn the graph showing the time each malware sample was safely opened by a Bromium user, relative to the time it was first scanned in our data (i.e. 0 on the x-axis). It turns out that the median malware sample in the set is first run by a Bromium user 7.0 hours before it is first scanned, a point at which the industry detection rate is presumably even worse than 43.4%.

[Figure: Malware detection with anti-virus (AV) software]

It also turns out that, in our customer data, 75% of the malicious files were opened by a user before they had ever been scanned. By definition the detection rate cannot be measured before this point, but it can only be lower than 43.4%. Thankfully, all of these users were protected by Bromium isolation.
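
Both of those numbers fall out of the same small calculation: subtract each sample's first-scan time from the time a Bromium user first opened it (first_opened is another hypothetical field, again in epoch seconds):

```python
from statistics import median

# Hours between a user first opening each sample and its first scan;
# negative values mean the file was opened before any engine had seen it.
offsets = [(s["first_opened"] - s["scans"][0]["timestamp"]) / 3600
           for s in known]

print(f"median open time vs first scan: {median(offsets):+.1f} h")  # ~ -7.0 h
early = sum(1 for o in offsets if o < 0) / len(offsets)
print(f"opened before first scan: {100 * early:.0f}%")              # ~ 75%
```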

My colleague Adrian Taylor talked a bit about the kind of samples that evade detection in a recent SANS webinar with Alissa Torres. To summarize briefly, the data in our corpus is mostly taken from large commercial organizations with a substantial investment in other security products, where Bromium is effectively their last line of defence. Here's another way of looking at this detection-in-depth, with the layers stacked as a funnel:

[Figure: Malware detection funnel]

At each level, detection is not perfect and some malware gets through. The data in my analysis therefore relates to malware that has already evaded several lines of such defences: perimeter firewalls, perimeter sandbox detonation devices, mailserver anti-virus, EDR / NGAV and endpoint anti-virus. It is obviously not surprising that we are dealing with samples that weren't initially detected by the tools a particular customer had deployed alongside Bromium, but it is quite disappointing how long it takes detect-to-protect vendors to update their signatures and rules to detect samples that have made it into the wild.
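
To see why a stack of imperfect layers still leaks, multiply the miss rates. The per-layer catch rates below are invented purely for illustration, and the multiplication assumes the layers miss independently, which is optimistic: against fresh samples the layers tend to share the same blind spot, namely signatures that don't exist yet.

```python
# Hypothetical catch rates for fresh samples at each detection layer.
layers = {
    "perimeter firewall": 0.30,
    "sandbox detonation": 0.50,
    "mailserver AV":      0.40,
    "EDR / NGAV":         0.50,
    "endpoint AV":        0.45,
}

evading = 1.0
for name, catch_rate in layers.items():
    evading *= 1 - catch_rate  # fraction slipping past this layer

print(f"fraction of fresh samples evading every layer: {evading:.1%}")  # ~5.8%
```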

Even after 30 days, the average detection rate across the 81 samples is only 85.5%.

As we mentioned earlier, Bromium does not rely on detection: applications run inside a lightweight micro-VM, fully isolated from the rest of the endpoint, so any malware they open can't cause any damage.

To Protect Your Genius, you need to isolate malware from your systems. Detection products sadly let some malware through, as our customer data shows. Seeing is believing: request a demo and see Bromium in action.

