Stop Naming Vulnerabilities – Just Stop

Attention online is fleeting. To make a splash, the prevailing feeling is that you need to say outlandish things, and the algorithms behind the major social networks and search engines reinforce that behavior. If your work’s success is measured by the attention it gets online, there’s pressure to play this attention-seeking game. After all, if you say something outlandish and don’t get attention, what’s the harm? This is why some security vulnerabilities get far more attention than others. Some vulnerabilities get named, branded and propelled by a huge wave of online buzz.

But that buzz is tied to the severity of the vulnerability, right?

Sadly, no. It’s rarely linked to the severity of the vulnerability, and that’s a major problem for security teams.

How Vulnerabilities Become Public

Vulnerabilities are constantly being revealed—this is a good thing. It means the security community is working with open source projects and vendors to make things better and more secure.

The process at the heart of this collaboration is called responsible disclosure. It’s not perfect, but it works more often than not.

The idea is simple. A security issue is discovered and reported to the open source project or vendor, who then works on a fix. When that fix is ready, the issue is made public.

Sometimes there are bumps in the road, but this process serves the community reasonably well.

Evaluating Risk

Once an issue is made public, a lot more work kicks off. Now everyone who is using the affected systems needs to evaluate their response to this issue. This is where things get complicated. That evaluation centers around the risk your security team thinks the issue poses to your business.

That risk is the combination of the potential impact of an event and the likelihood of that event happening.
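To make that concrete, here is a minimal sketch of the risk equation in Python. The 1-to-5 scales and the simple multiplication are assumptions for illustration, not part of any standard; the point is that likelihood can pull a high-impact issue below a lower-impact but more likely one.

```python
# A minimal sketch of risk = impact x likelihood, assuming a
# qualitative 1-5 scale for both factors (illustrative values,
# not from any standard).

def risk_score(impact: int, likelihood: int) -> int:
    """Risk as the combination of potential impact and likelihood."""
    return impact * likelihood

# A critical-impact issue that is unlikely to be exploited can rank
# below a moderate issue that is more likely to be exploited:
print(risk_score(impact=5, likelihood=1))  # 5
print(risk_score(impact=3, likelihood=2))  # 6
```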

Part of the vulnerability disclosure process is to assign a score using the Common Vulnerability Scoring System (CVSS). This score takes into account how easy the vulnerability is to take advantage of, whether it’s remotely exploitable, whether there’s an active attack using it, and more.
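To show which factors feed that score, here is a sketch of the CVSS v3.1 base score calculation for the common “Scope: Unchanged” case, using the metric weights published in the FIRST.org specification. Treat it as a teaching aid, not a replacement for the official calculator.

```python
import math

# Metric weights from the CVSS v3.1 specification (Scope: Unchanged).
AV = {"N": 0.85, "A": 0.62, "L": 0.55, "P": 0.20}  # Attack Vector
AC = {"L": 0.77, "H": 0.44}                        # Attack Complexity
PR = {"N": 0.85, "L": 0.62, "H": 0.27}             # Privileges Required
UI = {"N": 0.85, "R": 0.62}                        # User Interaction
CIA = {"H": 0.56, "L": 0.22, "N": 0.0}             # Conf./Integ./Avail. impact

def roundup(x: float) -> float:
    """Simplified CVSS round-up: smallest one-decimal value >= x."""
    return math.ceil(x * 10) / 10

def base_score(av, ac, pr, ui, c, i, a) -> float:
    iss = 1 - (1 - CIA[c]) * (1 - CIA[i]) * (1 - CIA[a])
    impact = 6.42 * iss
    exploitability = 8.22 * AV[av] * AC[ac] * PR[pr] * UI[ui]
    if impact <= 0:
        return 0.0
    return roundup(min(impact + exploitability, 10))

# Network-reachable, low complexity, no privileges or user interaction,
# high impact across the board: a 9.8 "Critical".
print(base_score("N", "L", "N", "N", "H", "H", "H"))  # 9.8
```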

This score helps teams figure out the potential impact of the vulnerability. What about the likelihood of someone trying to exploit that vulnerability, which is the second half of our risk equation?

Is this an incident we need to respond to?

This is an area where the security community struggles. The lack of reliable breach reporting data means that most teams use the potential impact information and their best guess of the likelihood of exploitation to make their risk decisions.

That’s bad.

But it’s also the only choice most teams have.

To make that guess a bit more realistic, teams will scour security groups, social media, research publications and other sources to determine just how worried they should be.

Without solid information on the likelihood of an issue occurring, the potential impact carries disproportionate weight in a team’s decision to act.

If they decide to act, a team will kick off an incident response process.

The goal of this process is to identify the issue, contain it, resolve it and then restore systems to normal operations.
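As a rough sketch, those phases can be modeled as an ordered sequence. The phase names below simply mirror the description above; formal frameworks such as NIST SP 800-61 divide the lifecycle differently.

```python
from enum import Enum, auto

# The incident response lifecycle described above, as ordered phases.
class Phase(Enum):
    IDENTIFY = auto()
    CONTAIN = auto()
    RESOLVE = auto()
    RESTORE = auto()

def next_phase(current: Phase) -> Phase | None:
    """Advance through the lifecycle in order; None once restored."""
    phases = list(Phase)
    index = phases.index(current)
    return phases[index + 1] if index + 1 < len(phases) else None

print(next_phase(Phase.CONTAIN))  # Phase.RESOLVE
```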

This process is disruptive to your business, and it’s not something to kick off lightly; it will pull in other teams from around the business to address the issue. If you start that process for a minor issue, it can erode security’s reputation internally and stop teams from working on their business goals while they address the situation.

Stop the Hype

This is where we see the true negative impact of (over)hyped vulnerabilities. In an effort to get ahead of the issue and be first with the news, the researchers and vendors disclosing it brand and pump up the issue to draw attention to it; this skews the information.

In the early stages of any vulnerability disclosure, information can be hard to come by. Hypotheses are floated. Data is gathered. Possibilities are explored.

Some of this pans out, some doesn’t. This is normal and to be expected. Technology is complicated and information will change throughout this process.

Inside the hype cycle, though, the slightest guess or suspicion often gets blown out of proportion. That hype makes it harder for security teams to properly evaluate and respond to incidents. Damage done.

“Why is that such a problem?” you ask.

During the early stages of vulnerability disclosure and discussion, the top priority for organizations is to figure out the risk (potential impact and likelihood of being impacted) to their systems.

Branded vulnerabilities tend to get a disproportionate amount of attention—attention they may not deserve. The hype skews the data. It makes it hard to properly evaluate how widespread this issue is.

As a security team, you only get to raise the alarm so many times before other teams don’t take you as seriously and start to push back. You only get to make the wrong call so many times before you lose your hard-won seat at the table.

If you are raising the alarm for a vulnerability that is all hype without the proper context, you’re losing ground.

Instead of drawing much-needed attention to a security risk, the hype is more likely to actively harm your security practice.

The Way Forward

There are new vulnerabilities and issues disclosed constantly. Generating awareness of serious issues can have value to the community. How do we make sure we stay on the side of awareness and not hype?

First, understand that the Common Vulnerabilities and Exposures (CVE) system is in place to provide a standard way of identifying specific issues. Vulnerabilities don’t need catchy names; a standardized identifier is more useful.
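A quick illustration of why a standardized identifier is more useful: a CVE ID follows a predictable CVE-<year>-<sequence> format, so tooling can validate, sort and cross-reference it, while a catchy name cannot be checked against anything.

```python
import re

# CVE IDs are CVE-<4-digit year>-<4-or-more-digit sequence number>.
CVE_PATTERN = re.compile(r"^CVE-(\d{4})-(\d{4,})$")

def parse_cve(identifier: str) -> tuple[int, int]:
    """Return (year, sequence number) for a well-formed CVE ID."""
    match = CVE_PATTERN.match(identifier)
    if not match:
        raise ValueError(f"not a valid CVE identifier: {identifier!r}")
    return int(match.group(1)), int(match.group(2))

print(parse_cve("CVE-2021-44228"))  # (2021, 44228)
# parse_cve("Log4Shell") raises ValueError: a branded name tells a
# scanner or ticketing system nothing on its own.
```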

We need more data on the likelihood of attacks. Mandatory breach reporting, threat intelligence sharing (formal and otherwise) and other efforts are underway. But more work is needed here.

At some point, ringing the alarm bells is useful. That shouldn’t be dictated by the need for more views—it should only happen after a vulnerability meets a certain threshold.

This is hard to put into practice given the nature of the problem and the community. But it’s safe to say that on day one there isn’t enough information to reliably make that call.

Cybersecurity is hard, and incident response is exceptionally hard. It’s a process that tries to navigate a constantly shifting set of priorities as new information arrives, and the teams involved are under immense pressure.

The goal of everyone involved should be to provide and verify data and help organizations address vulnerability issues. It’s not a time for self-promotion and hype.


About Mark Nunnikhoven

Mark Nunnikhoven is a Distinguished Cloud Strategist at Lacework. Nunnikhoven works with teams to modernize their security practices and to get the most out of the cloud. With a strong focus on automation, he helps bridge the gap between DevOps and traditional security through coaching, writing, speaking, and engaging with the cloud and security communities.
