Assessing next-generation protection

Latest report now online.
The amount of choice when trialling or buying endpoint security
is at an all-time high. It has been 36 years since ‘anti-virus’ first appeared
and, in the last five years, the number of companies innovating and selling
products designed to keep Windows systems secure has exploded.

And whereas once vendors of these products generally used
non-technical terms to market their wares, now computer science has come to the
fore. No longer are we offered ‘anti-virus’ or ‘hacker protection’ but
artificial intelligence-based detection and response solutions. The choice has
never been greater, nor has the confusion among potential customers.

While marketing departments appear to have no doubt about
the effectiveness of their products, the fact is that without in-depth testing
no-one really knows whether or not an Endpoint Detection and Response (EDR)
agent can do what it is intended to do.

Internal testing is necessary but inherently
biased: ‘we test against what we know’. Thorough testing, including the full
attack chains presented by threats, is needed to show not only detection and
protection rates, but response capabilities.

EventTracker asked SE Labs to conduct an independent test of
its EDR agent, running the same tests used against some of the world's
most established endpoint security solutions, as well as some of the
newer ones.

This report shows EventTracker’s performance in this test.
The results are directly comparable with the public SE Labs Enterprise Endpoint
Protection (Oct – Dec 2018) report, available here.

*** This is a Security Bloggers Network syndicated blog from SPECIAL EDITION authored by Simon PG Edwards.