No company has ever been breached as a result of a high-severity alert. There may be a few exceptions, but high-severity alerts are not generally what’s keeping security teams up at night. Organizations are well-prepared to defuse a threat once it’s identified and deemed a priority. Breaches overwhelmingly occur when threats are misrepresented, misprioritized or missed completely.
At the most basic level, every company has a system for prioritizing alerts and mitigating threats based on that prioritization. High-severity threats are handled at the highest level—sent immediately to a senior analyst who launches an in-depth investigation, mitigates as needed and closes once resolved. But what about all other alerts?
There are three major scenarios that plague security teams:
- False positives and duplicate alerts: Most organizations rely on static rules deployed across multiple security devices with the goal of identifying any and all anomalies and malicious behavior within an organization. Without the ability to apply dynamic rules specific to each device, legitimate activity can unintentionally be flagged as malicious, and duplicate alerts can be generated by different devices. Security analysts are then burdened with manually sifting through these false positives, taking valuable time away from addressing actual threats. According to a 2017 survey conducted by the Ponemon Institute, companies waste an average of 425 hours a week responding to and investigating false positives, costing them an average of $1.37 million annually.
- Evolving incidents: When a low-priority threat is identified, it is either closed on sight as a false positive or duplicate, addressed using a programmed response via automation or sent to a junior analyst for minor investigation. However, once the incident is closed, it is no longer tracked. If a related event occurs in the future, it is treated as a new incident instead of part of the previous, related threat. Without past context, a security team could end up managing a series of individual events that together meet the criteria for a high-priority incident. Hackers can use this as a workaround to avoid triggering the highly effective process for dealing with a high-priority incident.
- Alert overflow: An Enterprise Strategy Group report from last year found that more than half of organizations admit they ignore alerts that should be investigated because they lack resources to handle the overflow. More than one-third of respondents found it tough to keep up with the volume of alerts, and nearly 30 percent struggled because security operations tools weren’t well-integrated. This means a threat could slip by in the ignored data, and there is no way to know until it’s too late.
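The evolving-incident gap above can be illustrated with a minimal sketch: instead of closing each low-severity alert in isolation, a correlation step groups new events with past events that share an indicator and escalates once related activity accumulates. The `source_ip` field, event types and the escalation threshold here are illustrative assumptions, not a reference to any specific product.

```python
from collections import defaultdict

# Illustrative sketch: correlate low-severity events by a shared
# indicator (source IP here) and escalate when related events
# accumulate, instead of treating each closed alert as independent.
ESCALATION_THRESHOLD = 3  # assumption for illustration

class IncidentTracker:
    def __init__(self, threshold=ESCALATION_THRESHOLD):
        self.threshold = threshold
        self.history = defaultdict(list)  # indicator -> related events

    def ingest(self, event):
        """Record a low-severity event and return its current priority."""
        related = self.history[event["source_ip"]]
        related.append(event)
        # Individually minor events become high priority once enough
        # related activity has accumulated over time.
        return "high" if len(related) >= self.threshold else "low"

tracker = IncidentTracker()
events = [
    {"source_ip": "203.0.113.7", "type": "failed_login"},
    {"source_ip": "203.0.113.7", "type": "port_scan"},
    {"source_ip": "203.0.113.7", "type": "privilege_change"},
]
priorities = [tracker.ingest(e) for e in events]  # ["low", "low", "high"]
```

The point of the sketch is that none of the three events would individually clear a static severity bar; only retained history lets the third one escalate.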
Too Much Data, Not Enough Intelligence
The biggest reason these challenges are so pervasive is poor management at the data level. Orchestration and automation tools have been introduced to help once an alert is generated, but more can be done before an incident reaches this stage.
For example, static rules that were put in place when security teams were dealing with significantly less data are still relied on today. With the widespread adoption of artificial intelligence, nothing in a security stack should be static. Rules should be programmed to adapt to new circumstances so that alerts are more informed and, therefore, properly prioritized. Also, historical security data should be retained and applied to new events in real time. This can provide the missing context needed to understand the evolution of incidents within the unique context of an organization.
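One way to read "rules should adapt" is to replace a single fixed threshold with a baseline learned from each device's own history. The sketch below flags a metric only when it deviates from that device's rolling mean by several standard deviations; the window size and the 3-sigma cutoff are assumptions chosen for illustration.

```python
import statistics

# Illustrative sketch: a per-device adaptive rule. Rather than one
# static threshold for every device, each device is judged against its
# own recent history. Window size and sigma cutoff are assumptions.
class AdaptiveRule:
    def __init__(self, window=20, sigmas=3.0):
        self.window = window
        self.sigmas = sigmas
        self.samples = []

    def is_anomalous(self, value):
        """Return True if value deviates from this device's baseline."""
        if len(self.samples) >= self.window:
            mean = statistics.fmean(self.samples)
            stdev = statistics.pstdev(self.samples)
            anomalous = abs(value - mean) > self.sigmas * max(stdev, 1e-9)
        else:
            anomalous = False  # not enough history yet to judge
        self.samples.append(value)                  # retain history
        self.samples = self.samples[-self.window:]  # keep rolling window
        return anomalous

rule = AdaptiveRule(window=5)
for v in [10, 12, 11, 9, 10]:   # normal traffic for this device
    rule.is_anomalous(v)
quiet = rule.is_anomalous(11)   # within baseline -> not flagged
spike = rule.is_anomalous(100)  # far outside baseline -> flagged
```

Because the baseline is per-device and keeps updating, the same event volume that is routine on one device can still raise an alert on another, which is the gap static, fleet-wide rules leave open.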
Eliminating false positives and duplicate alerts, understanding evolving incidents and freeing up resources to avoid overflow allow security teams to be more effective at resolving all threats, not just the high-priority ones.