
Is Vulnerability Management Hopeless?

No, but you have to decide how much you’re willing to change to make it more effective…

Can billions in TAM be wrong?
In his blog post “Is Vulnerability Management Hopeless,” Gartner analyst Anton Chuvakin wonders if anything can be done to overcome the deluge of detected – but unmitigated – vulnerabilities. He notes that, on average, only 10% of detected vulnerabilities are ever remediated.

While statistics like this paint a bleak picture, I believe the answer to Chuvakin’s title question is “no”: vulnerability management is not hopeless.

But the bigger question is this: should you optimize tools and practices in support of your existing risk regime, or fundamentally change that risk culture and the vulnerability discovery practices that support it? For his part, Chuvakin makes four suggestions (along with a fifth escape clause: “something else new”).

Optimizing the existing approach
Chuvakin’s first suggestion – prioritize better – is the meatiest. Industry voices have been prescribing “a risk-based approach” to prioritizing vulnerability discovery output for decades, and in response, vendors have learned to couch dashboards in terms of risk. What’s the problem? The visibility that vulnerability discovery tools provide is too fragmented; each focuses on a sliver of the stack: host, network, software, OSS, or, more recently, container security. Because of this, organizations view risk only in silos, while they want a more holistic picture at levels the business can manage: the product or value stream delivered to a customer, a business unit, or a region.

A market segment has emerged to provide features in pursuit of this holistic view. Solutions in the AVM, TVM, and AVC spaces “correlate findings” from vulnerability discovery tools. Here’s the recipe ZeroNorth has found useful in providing customers that holistic view:

  1. Cut out the junk: whether it’s to show improvement, win bake-offs, or for various other reasons, vulnerability discovery tools report many issues that will simply never have impact as a security risk. These can be removed from reporting wholesale.
  2. Normalize finding scores across products using industry standards. Every tool scores differently, some defiantly so. Organizations often prefer industry-standard scoring, such as CVSSv3, but some OSS maintainers haven’t yet had the bandwidth to support that kind of output. Supporting CVSS means collecting scoring inputs from the findings and asset information provided by dissimilar tools at different lifecycle phases. (Steps 1, 2, 4, and 5 of this recipe are sketched in code after this list.)
  3. Relate several views of the same software value stream. Organizations that want a holistic view of risk need more than correlation of SAST and DAST results: experiential data show that each discovery technique is likely to find only about 5-20% of the critical and high vulnerabilities that exist in a deployed system. By tracking software and infrastructure through their lifecycles, and collecting and relating the resulting vulnerability discovery data, organizations can see vulnerability in the context of the full application and its operating environment.
  4. Generate ‘units of work’. Yes, notification integrations are great, but vulnerability discovery tools bury engineers in tickets because they emit tens of instances of the same developer error within one file or function, or because they find the same exposure of a running, vulnerable service through different access points. This information is valuable, but engineers should receive it in consolidated form, so they can address it as a single “unit of work.”
  5. Orient vulnerabilities to responsible parties. However cross-functional an organization is, it will still be organized into teams, business units, and regions. Being able to tie deployed artifacts and their operational behaviors back to the pipelines, code bases, and teams that created them lets Dev and Ops have a contextually richer conversation about how to mitigate risk.
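
To make the recipe concrete, here is a minimal Python sketch of steps 1, 2, 4, and 5. The severity mapping, field names, and ownership lookup are illustrative assumptions, not ZeroNorth’s actual implementation; in particular, real CVSSv3 scores are derived from vector metrics, so the severity-to-score table below is only a stand-in.

```python
"""Minimal sketch of steps 1, 2, 4, and 5 above. Tool severities, field
names, and the owner lookup are illustrative assumptions, not any
product's actual schema."""
from collections import defaultdict
from dataclasses import dataclass

# Step 2: map each tool's native severity onto the CVSSv3 qualitative bands
# (Low 0.1-3.9, Medium 4.0-6.9, High 7.0-8.9, Critical 9.0-10.0). Real CVSS
# scoring derives the base score from vector metrics; this is a stand-in.
SEVERITY_TO_CVSS = {"low": 3.0, "medium": 5.5, "high": 8.0, "critical": 9.5}

@dataclass
class Finding:
    tool: str        # which scanner reported the issue
    rule_id: str     # tool-specific check identifier (e.g., a CWE or plugin id)
    asset: str       # repo, image, or host the finding attaches to
    location: str    # file/function or exposed endpoint
    severity: str    # the tool's native severity label

def normalize(findings):
    """Steps 1 and 2: drop junk, then score what remains on a common scale."""
    scored = []
    for f in findings:
        score = SEVERITY_TO_CVSS.get(f.severity.lower())
        if score is None:  # step 1: informational noise never becomes risk
            continue
        scored.append((f, score))
    return scored

def units_of_work(scored, owners):
    """Steps 4 and 5: collapse duplicate instances into one ticket per
    (asset, rule) pair, then route each ticket to the owning team."""
    groups = defaultdict(list)
    for f, score in scored:
        groups[(f.asset, f.rule_id)].append(score)
    return [
        {
            "asset": asset,
            "rule": rule,
            "instances": len(scores),                  # N raw hits -> 1 unit of work
            "max_cvss": max(scores),
            "owner": owners.get(asset, "unassigned"),  # step 5
        }
        for (asset, rule), scores in groups.items()
    ]
```

Fed the raw output of several scanners, a pipeline like this hands engineers one scored, owner-routed ticket per distinct issue instead of hundreds of near-duplicate findings.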

Replace the Dominant Risk Management Religion
It is said that businesses are in business to take risks. And for over a decade, organizations have increased their agility so as to bring more value to their customers, faster, through software. Continuous delivery is the asymptote this trend approaches. Clearly, organizations are taking a “risk-on” posture when it comes to their software. How is security evolving to match?

As an evolution of the BSIMM study, Sammy Migues and I studied about twenty firms you’d recognize as thought leaders in the DevOps space. We found some very interesting trends that didn’t fit with our decades of helping firms proactively “build security in.” Yes, these firms tackled select security activities proactively (threat modeling, secure design of authN/Z, encryption, and the like), but much of what they were doing bore little resemblance to established vulnerability discovery practice.

No, these firms hadn’t simply found a magic mix of vulnerability discovery OSS that worked “at the speed of DevOps”; they’d fundamentally changed their approach to risk management.

Firms had forsaken proactive security governance through security assurance … for reactive risk management through telemetry collection throughout software development and operation … and coupled this with improved resiliency

In simple terms:

  1. The pace of SW delivery critically limits the business; and
  2. Security cannot afford to slow the pace of SW delivery; therefore
  3. Organizations are resigned to delivering SW at risk.

The risk paradigm has thus shifted:

  1. If we’re committed to deploying risky software, the telemetry we collect must be obtained more quickly and at higher resolution; and
  2. We must be able to trace identified risks to responsible code bases, pipelines, and teams (a sketch of this follows the list). Likewise,
  3. Our delivery pipelines must be fast enough to respond to whatever risk mitigation is prescribed.
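
A minimal sketch of point 2, assuming provenance is stamped onto each artifact at build time and keyed by its immutable digest; the digest, repo, pipeline, and team names below are invented for illustration:

```python
"""Toy traceability lookup: provenance recorded at build time lets a
production finding be walked back to the code base, pipeline, and team
that produced the artifact. All values here are invented examples."""

PROVENANCE = {
    "sha256:9f3c1d2e": {
        "repo": "git@example.com:payments/api.git",
        "commit": "9f3c1d2",
        "pipeline": "payments-ci/build-482",
        "team": "payments-platform",
    },
}

def route_finding(artifact_digest, finding):
    """Resolve a runtime finding to its responsible code base, pipeline,
    and team, so mitigation lands with the right owner."""
    origin = PROVENANCE.get(artifact_digest)
    if origin is None:
        # No build-time provenance: the finding can't be auto-routed.
        return {"finding": finding, "owner": "unknown", "action": "triage manually"}
    return {
        "finding": finding,
        "code_base": f"{origin['repo']}@{origin['commit']}",
        "pipeline": origin["pipeline"],
        "owner": origin["team"],
    }
```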

This makes sense. Organizations won’t slow delivery down for security, but they will strive to dramatically increase their visibility into potential risks to stay ahead of attackers.

This has foundational implications for vulnerability discovery tools. Point-in-time, scan-based models support security assurance activities, but not the real-time telemetry that continuous delivery pipelines and production operations demand. Consider the difference between how the leaders in Software Composition Analysis have productized their offerings and how Container Security vendors deliver a similar capability, often without the scan-based operating model. Expect this trend toward continuous visibility, with multiple sensors aggregated to score business assets, to continue as legacy scan-based engines disappear behind new productization.
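
To illustrate the operating-model difference, here is a toy sketch (not any vendor’s product) in which each sensor event updates a per-asset risk score the moment it arrives, rather than risk being recomputed on a scan schedule. Sensor names, weights, and event fields are all assumptions:

```python
"""Continuous-visibility sketch: events from multiple sensors stream in and
immediately update a rolling per-asset risk score, instead of waiting for
the next scheduled scan. Weights and event shapes are assumed values."""
from collections import defaultdict

# Rough per-sensor weights -- illustrative only.
SENSOR_WEIGHT = {"sca": 0.8, "dast": 0.9, "container": 1.0, "runtime": 1.2}

asset_scores = defaultdict(float)

def on_event(event):
    """Fold one sensor event into its asset's rolling risk score."""
    weight = SENSOR_WEIGHT.get(event["sensor"], 1.0)
    asset_scores[event["asset"]] += weight * event["cvss"]

# A scan-based engine would compute this once per cycle; here visibility
# updates with every event from every sensor.
for event in [
    {"sensor": "sca", "asset": "payments-api", "cvss": 7.5},
    {"sensor": "runtime", "asset": "payments-api", "cvss": 9.1},
]:
    on_event(event)
```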

Not Ready to Change Religions?
Is your organization ready to do what the DevOps luminaries have done, and exchange proactive security assurance for continuous full-lifecycle telemetry and resilience? Maybe not. Do you need to convert wholesale? Definitely not.

But by incrementally increasing your organization’s ability to prioritize vulnerability data as described above, and by finding ways to exchange point-in-time scans for continuous business-asset risk visibility, you can reduce the pain and frustration of vulnerability discovery: mountains of false positives and unsustainable security operations costs.

At ZeroNorth, we walk this path, with “SW-defined security governance” as our north star. The promise of a platform that helps with prioritization, while adjusting the usage and reporting modality of the vulnerability discovery tools firms already rely on, gives us faith that no, vulnerability management is not in fact hopeless.


*** This is a Security Bloggers Network syndicated blog from Blog | ZeroNorth authored by John Steven. Read the original post at: https://www.zeronorth.io/blog/is-vulnerability-management-hopeless/