An anonymous report claims that a ton of your company’s customer data has been exposed. A sense of calm is in the air as you enact your vulnerability disclosure policy. You save the day, get a promotion and rainbows and unicorns fill the sky. Then you wake up!! You don’t have a vulnerability disclosure policy. Panic quickly washes away the sounds of harps.
You’ve got to verify this incident quickly, handle it (mitigation and disclosure) well and carefully manage the narrative in case the story goes public. This isn’t one of those ‘2 out of 3 ain’t bad’ scenarios; you need to do all three. More than anything, though, this information needs to get to the right people quickly to avoid making the problem worse. Who are the right people?
In many cases, when someone stumbles upon exposed data, their first impulse is to report it. Your job is simple — make it easy for them to do the right thing. In some cases, this individual is a customer and realizes their own personal data might be at risk. They want you to know about the problem and they want it to be fixed, because the risk is personal.
Making it easy to report issues doesn’t have to be a lot of work. A simple web page or form that’s easy to find and includes instructions on how to report issues can go a long way.
So why do we often see it handled so badly?
The recent Panera Bread debacle is an excellent example of how NOT to handle vulnerability disclosure.
In most cases, the situation is handled badly because no one has taken the time to prepare for this sort of situation. Either that, or the business is suffering from ‘nothing-bad-has-happened-yet’ syndrome. Yes, there are a lot of things a business has to worry about, but, for things like insurance and breach preparation to be effective, they need to be in place before the loss event occurs.
Any organization that makes a product, software, has a significant Internet presence or collects customer data needs a process for issues and vulnerabilities to get reported and communicated. Even a basic effort to open communication channels can prevent a future PR mess. This article describes how to build this process and why it’s important for every business to have one.
What’s a Vulnerability Disclosure Policy?
Put simply, a Vulnerability Disclosure Policy (VDP) describes how you ensure cybersecurity-related issues get reported to the right people as quickly as possible. If someone finds a vulnerability in a website, product or piece of software, or discovers leaked data belonging to a company, a VDP describes the means for reporting it. The concept is simple, but the execution is not; it takes experience and an understanding of the hacker perspective. Wanting to be kind and responsible, many people try to report security issues, but that’s often where things fall apart.
The good news is that there are tons of examples in the public record to learn from (both good and bad). The bad news is that, while most organizations are aware of these examples, translating them into best practices is not intuitive. A statistic often shared by Katie Moussouris, author of the ISO standard for vulnerability disclosure, is that 94% of the Fortune 2000 have no published policy for vulnerability reporting. This statistic comes from a HackerOne report that shows no change between 2015 (94%) and 2017 (94%) despite a marked increase in bug bounty activity over these two years.
In a perfect world, the disclosure process would go like this:
- Sam stumbles on an open S3 bucket full of sensitive, proprietary data belonging to ACME Corp.
- Sam visits the company’s website, finds a contact for reporting security issues (security@acme.com) and sends an email with details about the discovered issue.
- Ashley at ACME receives the email and thanks Sam. A ticket for the issue is created internally and assigned to the cloud team.
- The cloud team corrects the issue within a few hours of Sam first finding the open S3 bucket.
- Ashley thanks Sam again and asks for confirmation that the bucket is no longer open.
- Sam validates that the issue is resolved and has warm fuzzy feels from doing a good deed.
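As an aside, a report like Sam’s is easy to verify without any credentials: an S3 bucket that allows anonymous listing answers a plain HTTP GET with a ListBucketResult XML document, while a locked-down bucket returns an AccessDenied error. The sketch below is a minimal illustration of that check; the `acme-data` bucket name is hypothetical.

```python
import urllib.request
import urllib.error

def bucket_listing_status(body: str, status: int) -> str:
    """Classify an anonymous S3 GET response.

    A publicly listable bucket returns HTTP 200 with a
    <ListBucketResult> XML body; a locked-down bucket returns
    403 AccessDenied; a missing bucket returns 404 NoSuchBucket.
    """
    if status == 200 and "<ListBucketResult" in body:
        return "publicly listable"
    if status == 403:
        return "exists, listing denied"
    if status == 404:
        return "no such bucket"
    return "unknown"

def check_bucket(name: str) -> str:
    """Anonymously probe https://<bucket>.s3.amazonaws.com/."""
    url = f"https://{name}.s3.amazonaws.com/"
    try:
        with urllib.request.urlopen(url, timeout=10) as resp:
            return bucket_listing_status(resp.read().decode(), resp.status)
    except urllib.error.HTTPError as err:
        return bucket_listing_status(err.read().decode(), err.code)

# e.g. check_bucket("acme-data")  # hypothetical bucket name
```

This is also exactly the check Sam can re-run in the final step to confirm the bucket is no longer open.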
Depending on how mature ACME’s vulnerability disclosure program is, the reward for finding the bug might be limited to a simple thanks or modest public recognition, like inclusion on a ‘wall of fame’. On the other end of the scale, bugs could be worth serious money.
In the real world, things don’t always go as smoothly as the ACME example, especially when the business has never considered how to handle an external vulnerability report. When 94% of businesses don’t even have an established process, there’s a good chance that things are going to be handled badly when that first report comes in. Unprepared organizations have been known to meet reports with confusion and suspicion. There are too many examples where a simple vulnerability report is handled badly.
There are situations where the business is legitimately unaware of the individual trying to contact them. There are situations where the business is intentionally ignoring them or downplaying the seriousness of the reported issue. In many cases, it becomes a PR debacle for the organization involved. Following are some of the most common scenarios.
Lack of Response
A common occurrence when notifying a business of an issue is no response at all. Attempts to contact someone via email or even phone are met with silence. Maybe they’re not home. Maybe the person who was supposed to be reading emails from the security mailbox neglected to reassign that responsibility before leaving the company. Maybe there is no security mailbox, and the researcher has to choose between the info, postmaster or pr mailboxes. Typically, the folks manning those accounts have no idea what SQL injection is or why they should care about it.
Shooting the Messenger
The company overreacts and ‘shoots the messenger’. The company is freaked out by the vulnerability and has a hard time separating the person who reported the issue from the issue itself. As if somehow the vulnerability wouldn’t exist if this ‘troublemaker’ hadn’t discovered it. In some cases the business threatens the individual or even takes legal action against them. We’ve seen individuals arrested, expelled, threatened, sued and blamed for reporting issues.
Lip Service
The company acknowledges the issue, but only to make the finder go away. They don’t intend to actually fix it, or at least they’ve set the priority so low that it might as well never have been reported. This is often the most frustrating category of response and has bred a lot of mistrust between researchers and businesses. More on that in the next section of this article.
Best Friend or Worst Enemy — It’s Your Choice
Note in the ACME example above that Sam, the individual who discovered the issue, is an important part of the process. While some individuals are happy to report an issue and never think about it again, most security researchers, whether they stumbled across something or were intentionally seeking out flaws, will follow up on the issues they report rather than fire and forget.
Researchers are keenly aware that they may be the only person on the planet who knows an issue exists. Many bug hunt as a part-time gig or even a full-time job, so keeping tabs on reported issues ensures they can pay the bills. Many others have a strong sense of responsibility that drives them to ensure the vulnerabilities they discover get fixed and aren’t just swept under the rug.
The recent Panera Bread snafu is a good example of this dilemma, and the researcher’s concerns were justified: he waited eight months before becoming frustrated and taking the issue public.
It took one day to report, 8 months to anger the researcher enough to go Full Krebs.
Quoting the KrebsonSecurity.com article:
Asked whether he saw any indication that Panera ever addressed the issue he reported in August 2017 until today, Houlihan said no.
“No, the flaw never disappeared,” he said. “I checked on it every month or so because I was pissed.”
Panera had an opportunity to resolve the matter quietly, but, as happens all too often, it took several rounds of public shaming before the problem was taken seriously. Unfortunately, the threat of public exposure remains the primary, and most effective, tool individuals have for getting businesses to take care of security issues.
Some researchers are so disillusioned with the corporate world, they opt for ‘full disclosure’, meaning that their first and only move is to go public with what they’ve found. This may seem unfair, but in many cases the reason individuals stumble on these issues in the first place is because they are customers or clients. Part of the data being exposed is theirs, which can make the situation personal.
Also, if we look at this dilemma from the researcher’s perspective, it’s often a single, well-meaning individual against a huge corporation like Sony, Apple or Microsoft. Threatening to publicly expose the details of the issue is often, in the researcher’s mind, the only card they have to play if they ever want to see it fixed. In some cases, like with Google’s Project Zero research team, a standard time limit of 90 days is attached to every vulnerability reported. Most of the time, this compromise works well, but occasionally (especially with Microsoft, it seems) it still generates friction.
Creating an Effective Vulnerability Disclosure Policy
The beginning of this piece emphasized the importance of handling vulnerability disclosure well. ‘Handling it well’, however, doesn’t mean ‘making no mistakes’. It means communicating often and clearly, being transparent about the issue, and addressing it in a reasonable time frame. A VDP could be as simple as an email address posted on a website with some instructions on how to report issues. That wouldn’t be a very good VDP, but at least there would be a functional line of communication and a clear way of finding and using it. It could also be a form on the company’s website, or it could be hosted by a bug bounty platform provider like Bugcrowd or HackerOne.
Some random Googling revealed a great example on the website of Yatra, an Indian travel business. Rather than a standalone VDP, Yatra publishes a bug bounty program, along with a form for reporting bugs and a wall of fame recognizing those who have found and reported bugs in the past.
At a minimum, a VDP should:
- Make it easy to find contact information and report issues to the right person. This contact should exist specifically for reporting bugs and security issues! The sales team probably won’t know what an ‘open S3 bucket’ is or what to do with that information.
- Provide clear instructions on what to report through this channel and how to report it.
- Set communication expectations, with an SLA at the very least. It is important to stick to this SLA; over-communication is preferable to long, extended periods of silence.
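One lightweight way to satisfy the first two points is the security.txt convention (standardized as RFC 9116): a small plain-text file served at /.well-known/security.txt that tells researchers exactly where to send reports. The addresses and URLs below are illustrative, not real:

```
Contact: mailto:security@acme.com
Expires: 2026-01-01T00:00:00Z
Encryption: https://acme.com/pgp-key.txt
Policy: https://acme.com/security/vdp
Acknowledgments: https://acme.com/security/wall-of-fame
Preferred-Languages: en
```

Researchers (and automated scanners) increasingly check this location first, so even a file this small meaningfully lowers the barrier to reporting.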
Also, as previously mentioned, an ISO standard, ISO/IEC 29147, formally describes how to establish a vulnerability disclosure policy, and it is freely available on the ISO/IEC Information Technology Task Force website. Troy Hunt, best known for his Have I Been Pwned breach information database, has some fantastic guidance on vulnerability disclosure as well.
Next Steps: The Vulnerability Handling Policy
While the VDP focuses on communicating with the general public about issues, a Vulnerability Handling Policy (VHP) is entirely focused on how a vulnerability is prioritized, fixed and deployed internally. The two are closely tied: reports received through the public-facing VDP feed directly into the internal VHP process.
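To make the distinction concrete, the prioritization step of a VHP can be as simple as mapping a report’s severity to a fix-by deadline. The severity tiers and SLA windows below are invented for illustration, not taken from any standard; every organization should tune them to its own risk appetite.

```python
from datetime import datetime, timedelta, timezone

# Hypothetical severity tiers and remediation SLAs (illustrative only).
SLA_DAYS = {
    "critical": 7,    # e.g. exposed customer data, remote code execution
    "high": 30,       # e.g. authentication bypass on a secondary system
    "medium": 90,     # e.g. information leakage with limited impact
    "low": 180,       # e.g. hardening gaps, missing security headers
}

def fix_by(severity: str, reported: datetime) -> datetime:
    """Return the remediation deadline implied by the SLA table."""
    return reported + timedelta(days=SLA_DAYS[severity])

# A critical report filed on 2018-04-02 must be fixed within a week.
reported = datetime(2018, 4, 2, tzinfo=timezone.utc)
deadline = fix_by("critical", reported)
```

The point is not the specific numbers but that the deadline is decided by policy the moment a report arrives, rather than negotiated after the researcher starts asking why nothing has happened.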
Having a Vulnerability Disclosure Policy is becoming as essential as, if not more important than, a set of Terms and Conditions. But like anything, the first step is admitting that you have a problem, and that can be hard in a world where getting hacked and having vulnerabilities is increasingly normalized. The conversation needs to shift from ‘what if we have an incident?’ to ‘how do we handle it when we do?’ These days, the public is prepared to accept that breaches occur. What they really care about is how well a breach is handled.
My number one takeaway is for organizations to take on the hacker mindset, even if just a tiny bit. Become bug hunters! Look at things from the outside-in! The first step of this journey then becomes obvious.
Go — look at your company’s website. Consider its products and applications. How would the general public report an issue? How easy is it to find the right contact information when starting with zero knowledge? Who are the recipients of these emails? Would they forward a critical security report to the right person internally or would they consider it a scam and delete it?
Finally, this is not a technical issue alone. It is important to prepare your Vulnerability Disclosure Policy as well as your Vulnerability Handling Policy with your legal teams, marketing, sales, PR and especially executive leadership. These groups can often be the most visible portions of the company and are more likely to receive random vulnerability reports. They don’t need to know the inner workings of the remediation and disclosure processes, but they at least need to know that they exist and to whom to pass the baton.
Adrian Sanabria is the co-founder of Savage Security. Adrian’s past experience includes thirteen years as a defender and consultant, building security programs, defending large financial organizations and performing penetration tests. He has spent far more time dealing with PCI than is healthy for an adult male of his age. Adrian learned the business side of the industry as a research analyst for 451 Research, working closely with vendors and investors. He is an outspoken researcher and doesn’t shy away from the truth or being proven wrong. Adrian loves to write about the industry, tell stories and still sees the glass as half full.
*** This is a Security Bloggers Network syndicated blog from The Ethical Hacker Network authored by Adrian Sanabria. Read the original post at: http://feedproxy.google.com/~r/eh-net/~3/Mg-NGQGfAn0/