How to Stay Out of the Security Shelfware Trap

The RSA Conference is just around the corner, and with it, one of the true spectacles of the security industry. If you visit the exhibitor show floor, you will find a seemingly endless sea of security vendors and products stretching in all directions, each one promising to be the critical missing piece to save you from the next attack. It can be exciting, quasi-educational, and more than a little mind-numbing all at once.

But before being mesmerized by all the shiny UIs and dashboards, remember this somewhat sobering thought – much of what you see on the show floor is destined to be shelfware. In spite of all the innovation, more and more security technology is going underutilized or flat-out unused. There are a variety of reasons why seemingly good technology can be doomed to collect dust, and knowing them is key to avoiding bad investments. So with that in mind, let’s take a look at what’s driving shelfware and how you can spot the dead weight before investing in it.

Budgeting for the Human Cost

When considering a new security product, the pricing model, hardware costs, and support costs are all analyzed and negotiated in minute detail. But while the MBAs grind on the financial side of the product, most organizations fail to really plan for the equally important issue of how the security team will use the product on a daily basis. How much staff time is required? Whose skills are needed? How will it fit into existing operations? Most SecOps teams are already at capacity and virtually every new product requires input of time and effort from staff. If the team doesn’t have the time to support a product, or if it doesn’t align with how they actually work, then it is going to go unused.

Are You Putting Your Humans to Work for the Bots?

Operational overhead of security products has always been an issue, but the rise of machine learning and analytics has pushed things to an entirely new level. Network firewalls can take plenty of time to deploy and tune, but ultimately, they have the nice trait of boiling everything down to an “allow” or a “deny”.

However, as security has grown ever-more sophisticated, we increasingly find ourselves in grey areas. Security analytics, machine learning, and AI are transforming the art of threat detection, but they are rarely cut and dried. When an algorithm surfaces something anomalous or suspicious, it often falls to a human to understand the issue before action is taken. And this is the seed of a potentially disastrous situation. If we employ machines to generate tons of new errands for our humans, then we are going to be in a very bad spot. Suddenly the machine-human interaction starts to scale in the wrong direction. Yet this is exactly what is happening with many UBA, security analytics, and advanced detection products today – and why many of them are destined for the shelf. They can generate lots of “interesting” data, but only if a person has the time to look at it.

Evaluate the Process, Not the Product

So we’ve seen some of the pitfalls, but what can we do about them? First, it is always important to evaluate a product with your own staff who will actually be using it. And this requires real stick time with the product. I’ve seen way too many product evaluations where the vendor SE is the only person who actually uses the product.

The SE runs a slick demo, sets up an evaluation, pops in every few weeks to interpret the results, and everything looks fine. But once he’s gone, it quickly becomes apparent that no one can use the system quite like he did. So it’s critically important that your team is actually doing the evaluating. It will take some time to get them trained, but it’s the only way you’ll know that it works for you. And if your team can’t find the time to learn the product, that should be a major red flag that the product is going to end up as shelfware.

Getting Next Generation With Adaptive Responses

Next, take a hard look to see where the product creates work and potentially removes work for your staff. Is it fully automated? How does it deal with uncertainty? Can it adapt based on context? How does it fit in with your other products and workflows? These are the types of questions that will reveal how much time and effort staff will need to spend to support the product.

For example, at Preempt we see an anomaly as the first step of a process. What else do we know that can help us understand the real risk? What is the user’s role and privileges? What do we know about the device and its location? What other events might be in play? What can we learn from the SSO or VPN solutions? But eventually, we will run through everything that we already know, so we have to ask new questions. For example, if the user is behaving strangely, we might challenge the user to verify their identity via multi-factor authentication. Then with new information, we can make informed decisions on the next step.

If the user passes authentication, we can automatically close the incident and actively reduce the number of events an analyst has to review. If we need to take action, we can apply a series of gradual responses that align to the risk, such as reducing user privileges, forcing a password reset, triggering various types of alerts, or blocking the connection outright. As the situation and risk evolve, so do the responses. Some may require security staff to be involved and some not. But the point is that the enrichment and response policy aligns with the way that the organization wants to work. And that is the critical component for avoiding shelfware.
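To make the idea concrete, here is a minimal sketch of a graduated, context-aware response policy along the lines described above. Everything in it is illustrative – the `Anomaly` fields, the scoring weights, and the response names are hypothetical and do not reflect Preempt’s actual product or API.

```python
# Hypothetical sketch: enrich an anomaly with context, challenge the user
# via MFA when the risk is unclear, then respond in proportion to risk.
from dataclasses import dataclass
from typing import Optional

@dataclass
class Anomaly:
    user: str
    privileged: bool                  # does the user hold admin rights?
    known_device: bool                # has this device been seen before?
    passed_mfa: Optional[bool] = None # None = not yet challenged

def risk_score(a: Anomaly) -> int:
    """Enrich the raw anomaly with context before deciding anything."""
    score = 1                 # every anomaly starts with baseline risk
    if a.privileged:
        score += 2            # privileged accounts raise the stakes
    if not a.known_device:
        score += 1            # unfamiliar device adds uncertainty
    return score

def respond(a: Anomaly) -> str:
    """Pick the next step: auto-close, challenge, or escalate."""
    score = risk_score(a)
    if score <= 1:
        return "close_incident"   # low risk: no analyst time spent
    if a.passed_mfa is None:
        return "challenge_mfa"    # ask the user to prove their identity
    if a.passed_mfa:
        return "close_incident"   # identity verified: auto-close
    # Failed the challenge: escalate in proportion to risk.
    return "reduce_privileges" if score < 4 else "block_connection"
```

The key property is that the human is only pulled in at the end of the funnel: low-risk events close themselves, ambiguous ones are resolved by asking the user, and only confirmed high-risk events ever reach an analyst.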

It’s not enough just to be a good product – it has to be a part of your process.

If you’re at RSA April 16-19, be sure to stop by to see Preempt in action in booth 4804 in the North Hall. If you’d like to schedule time with one of our security experts to see a demo, contact us here.

*** This is a Security Bloggers Network syndicated blog from Preempt Blog authored by Wade Williamson. Read the original post at:
