SecMon State of the Union: Refreshing Requirements

Posted under: Research and Analysis

Now that you understand the use cases for security monitoring, the next step in our journey is to translate those use cases to requirements for your strategic security monitoring platform. In other words, now that you have an idea of the problem(s) you need to solve, what capabilities do you need to address the use cases? And part of that discussion is inevitably what you don’t get from your existing security monitoring approach, since this research wouldn’t be very interesting if you were all peachy with your existing tools.

Visibility

We made the case that Visibility is Job #1 in our Security Decision Support series. Maintaining sufficient visibility across all of the moving pieces in your environment is getting harder. So when we boil it down to a set of requirements, it looks like this:

  • Aggregate existing security data: We could have called this requirement “same as it ever was,” since all of your security controls generate a bunch of data that you need to collect. Kind of like the stuff you were gathering in the early days of SEM (security event management) or log management 15 years ago. Given all of the other things on your plate, what you don’t want is to have to worry about integrating your security devices or figuring out how to scale the solution given the size of your environment. To be clear, security data aggregation has commoditized, so this is really table stakes for whatever solution you consider.
  • Data management: Amazingly enough, when you aggregate a bunch of security data, you need to manage it. So data management is still a thing. We don’t need to go back to SIEM 101, but aggregating, normalizing, reducing, and archiving security data is a core function of any security monitoring platform, regardless of whether it started life as SIEM or a security analytics product. One thing to consider (that we’ll dig into more when talking procurement strategies) is the cost of storage, since some emerging cloud-based pricing models can be punitive when you significantly increase the amount of security data collected.
  • Embracing New Data Sources: In the old days the complaint was that vendors did not support all the devices (security, networking, and computing) in the organization. As described above, that’s less of an issue now. But consuming and integrating cloud monitoring, threat intelligence, business context data (like asset information or user profiles), and non-syslog events all drive a clear need for streamlined integration, so you can get value from additional data faster.
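
To make the aggregation and normalization idea concrete, here is a minimal sketch mapping two very different inputs (a syslog line and a JSON cloud event) onto one common record. The schema, field names, and formats are illustrative assumptions, not any product’s actual data model:

```python
import json
import re
from dataclasses import dataclass

# Hypothetical common schema for aggregated security events.
@dataclass
class SecurityEvent:
    source: str      # where the event came from (syslog, cloud, etc.)
    timestamp: str
    host: str
    action: str

# Classic syslog layout: "Mon DD HH:MM:SS host message..."
SYSLOG_RE = re.compile(r"^(?P<ts>\S+\s+\S+\s+\S+)\s+(?P<host>\S+)\s+(?P<msg>.*)$")

def normalize_syslog(line: str) -> SecurityEvent:
    m = SYSLOG_RE.match(line)
    return SecurityEvent("syslog", m.group("ts"), m.group("host"), m.group("msg"))

def normalize_cloud_json(raw: str) -> SecurityEvent:
    # Field names here mimic (but are not) a real cloud audit-log schema.
    evt = json.loads(raw)
    return SecurityEvent("cloud", evt["eventTime"],
                         evt.get("sourceIPAddress", "unknown"), evt["eventName"])
```

Once everything lands in one shape, downstream reduction, correlation, and archiving only have to handle a single record type.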

Seeing into the Cloud

When considering the future requirements of a security monitoring platform, you need to understand how it’s going to track what’s happening in the cloud, since it seems the cloud is here to stay (and yes, that was facetious). You start with API support, since this is the lingua franca of the cloud. So any platform you choose must be able to make API calls to the service(s) you use and/or pull information and alerts from a CASB (Cloud Access Security Broker) used to track use of SaaS within your organization.
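
The API-pull pattern itself is simple: page through the provider’s (or CASB’s) log endpoint until the continuation token runs out. A minimal sketch, where the `fetch` signature is an assumption standing in for whatever client your provider actually exposes:

```python
from typing import Callable, Iterator, Optional, Tuple

# fetch(page_token) returns (events, next_token); next_token of None means done.
# This signature is hypothetical -- real cloud/CASB APIs differ in detail, but
# almost all follow this paginated-pull shape.
def poll_all_events(
    fetch: Callable[[Optional[str]], Tuple[list, Optional[str]]]
) -> Iterator[dict]:
    token = None
    while True:
        events, token = fetch(token)
        yield from events
        if token is None:
            return
```

In practice you would wrap the vendor’s SDK call in `fetch`, run the poll on an interval, and persist the last token so a restart doesn’t re-ingest everything.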

You’ll also want to understand the architecture involved in gathering data from multiple cloud sources. You definitely use multiple SaaS services and likely have many IaaS (Infrastructure as a Service) accounts, possibly across multiple providers, to consider. All of these environments create data that needs to be analyzed for security impact, so you should define a standard cloud logging and monitoring approach, and likely centralize the aggregation of cloud security data. You should also consider how the cloud monitoring integrates with your on-prem solution. For more detail, check out our paper on Monitoring the Hybrid Cloud.

For specific considerations of the different cloud environments:

  • Private cloud/virtualized data center: There are differences between monitoring your existing data center and a highly virtualized environment. You will be able to tap the physical network within your data center for additional visibility. But for the abstracted layer above that — which contains virtualized networks, servers, and storage — you need proper access and instrumentation in the cloud environment to see what happens within virtual devices. You can also route network traffic within your private cloud through an inspection point, but the cost in architectural flexibility is substantial. The good news is that security monitoring platforms (mostly) have the ability to monitor within virtual environments by installing sensors within the private cloud.
  • IaaS: The biggest and most obvious challenge in monitoring IaaS is the reduced visibility because you don’t control the physical stack. You are largely restricted to logs provided by your cloud service provider. IaaS vendors abstract the network, impacting your ability to see network traffic and/or capture network packets. You can run all network traffic through a cloud-based choke point for collection, regaining a faint taste of the visibility inside your own data center, but that sacrifices much of the architectural flexibility inherent to cloud. You also need to figure out where to aggregate and analyze collected logs from both the cloud service and individual instances. These decisions depend on a number of factors — including where your technology stacks run, the kinds of analysis to perform, and what expertise you have available on staff.
  • SaaS: Basically, you see what your SaaS provider shows you, and not much else. Most SaaS vendors provide logs to pull into your security monitoring environment. They don’t provide visibility into the SaaS vendor’s technology stack, but you will be able to track what your employees are doing within the service — including administrative changes, record modifications, login history, and increasingly application activity. You can also pull information from a CASB that is polling SaaS APIs and analyzing egress web logs for additional detail.

Threat Detection

The key to threat detection in this new world is the ability to detect attacks you know about (rules-based), attacks you haven’t seen yet but someone else has (threat intelligence driven), and unknown attacks that cause anomalous activity by your users or devices (security analytics). The patterns you are trying to detect could be pretty much anything, including command and control, fraud, system misuse, malicious insiders, reconnaissance, or even data exfiltration. So there is no lack of stuff to look for; the question is what do you need to detect it?

  • Rules: You can’t ditch your rules, so don’t even think about it. Actually, you can, but you’ll likely miss a bunch of attacks you should catch because you know the attack pattern. Behavioral models focus on the stuff you don’t know about, and aren’t optimized to find known bad stuff. As with endpoint protection, rules (signatures) are not an either/or proposition. If you already know about an attack, shame on you if you miss it.
  • Threat Intelligence: For attacks you haven’t seen yet, in the old days you’d be out of luck. But today there is a decent chance someone else has seen the attack, and that’s where threat intelligence comes into play. Pump a threat feed into your security monitoring platform, and you’ll be ready when the attack comes for you. Make sure you can categorize threat intel alerts separately, since you’ll want to track the effectiveness of each feed, both to gauge its value and to make sure it isn’t increasing alert noise.
  • Security Analytics: The final approach you need to consider is based on advanced math. You’ll hear terms like security big data, machine learning, or just the generic “it’s fancy math, trust us” to describe these techniques. Regardless of the description, security analytics involves profiling devices, networks, and applications to build a baseline of normal activity, then looking for deviations from that profile which would indicate malicious activity. It’s very difficult to discern the differences between one analytics approach and another, so understanding what will work for your organization requires actually trying them. We’ll discuss procurement in the next post.
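
The three approaches compose naturally into one detection pass. A toy sketch, with a made-up rule, feed, and telemetry (a per-user login count), just to show how the layers complement each other:

```python
import statistics

THREAT_INTEL_IPS = {"203.0.113.7", "198.51.100.9"}  # stand-in for a threat feed

def rule_disable_logging(evt: dict) -> bool:
    # Rules: a pattern you already know is bad.
    return evt.get("action") == "disable_logging"

def detect(evt: dict, login_history: list) -> list:
    alerts = []
    if rule_disable_logging(evt):
        alerts.append("rule:disable_logging")
    # Threat intel: an indicator someone else has already seen.
    if evt.get("src_ip") in THREAT_INTEL_IPS:
        alerts.append("intel:known_bad_ip")
    # Analytics: flag activity far outside the historical baseline.
    if len(login_history) >= 2:
        mean = statistics.mean(login_history)
        stdev = statistics.stdev(login_history) or 1.0
        if evt.get("login_count", 0) > mean + 3 * stdev:
            alerts.append("analytics:anomalous_logins")
    return alerts
```

Note that tagging each alert with its source (`rule:`, `intel:`, `analytics:`) is what makes tracking feed effectiveness, as described above, straightforward later.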

After a few years of using security monitoring technology, hopefully at this point you realize this isn’t (and likely won’t ever be) a set it and forget it situation. You’ll need to keep the system current and tune it accordingly, because not only are adversaries constantly changing and evolving their tactics, but your environment is constantly changing as well, requiring ongoing maintenance.

So you’ll want to build a learning and tuning step into your operational processes, so you improve the detection process based on both false positives (alerts that weren’t real attacks) and false negatives (attacks you missed). If you want to be successful in detecting attacks, a feedback loop is critical.
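
That feedback loop can be as simple as recording each alert’s disposition and surfacing the rules whose false-positive rate crosses a threshold. A sketch, where the threshold and structure are arbitrary choices for illustration:

```python
from collections import Counter

# Tuning feedback loop: track per-rule alert dispositions and flag rules
# whose false-positive rate suggests they need retuning.
class FeedbackLoop:
    def __init__(self, fp_threshold: float = 0.8):
        self.fired = Counter()            # alerts fired per rule
        self.false_positives = Counter()  # of those, how many weren't real
        self.fp_threshold = fp_threshold

    def record(self, rule: str, was_real_attack: bool) -> None:
        self.fired[rule] += 1
        if not was_real_attack:
            self.false_positives[rule] += 1

    def noisy_rules(self) -> list:
        return [r for r in self.fired
                if self.false_positives[r] / self.fired[r] >= self.fp_threshold]
```

False negatives feed the loop from the other direction: each missed attack should produce a new or adjusted rule, which this same bookkeeping then measures.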

Forensics and Response

Obviously you cannot prevent every attack, and even if you do fire an alert about a specific attack, your Security Ops team may miss it, which has been known to happen. Thus the security monitoring platform will also play a major role in the incident response process. The challenge is less about gathering the data or trying to link it together, and more about making sense of the information at your disposal in a structured fashion, to accelerate identification of the root cause of any attack. As we discussed in our Future of Security Operations paper, many aspects of the response process can be automated, so ensuring support for that is key.

The key capabilities include:

  • Search: Most modern attacks are not limited to a single machine, so you’ll need to figure out quickly how many devices have been attacked as part of a broader campaign. Some of that takes place during validation/triage as the analyst pivots, but figuring out the breadth of an attack requires them to search the entire environment for indicators of the attack, typically via metadata.
    • Natural Language/Cognitive Search: An emerging search capability is the use of natural language search terms instead of arcane Boolean operators. This helps less sophisticated analysts be more productive without having to learn a new language.
    • Retrospective Search: Responders often have a sense of what caused the attack, so searching through historical security data enables them to find activity which might not have triggered an alert at the time, possibly because it wasn’t yet a known attack.
  • Case Management: The objective is to make each analyst as effective and efficient as possible, so you should have a place to store all information related to an incident. This includes enrichment data from threat intel and other artifacts gathered during validation. This should also feed into a broader incident response platform if the forensics/response team uses one.
  • Visualization: To reliably and quickly validate an alert, it is very helpful to see a timeline of all activity related to the incident. That way you can see what actually happened across many devices and get a broader understanding about the depth of the issue. An analyst can take a quick look at the timeline and figure out what requires further investigation. Visualization can present all sorts of information, but be wary of overcomplicating the console. It is definitely possible to present too much information.
  • Drill Down: Once an analyst has figured out which activity in the timeline raises concerns, they drill into it. At each stage of the attack, they’ll find other things to investigate, so being able to jump between events and devices helps identify the root cause of attacks quickly. There is also a decision to be made regarding how much data to collect and have at the ready. Obviously the more granular the available telemetry, the more accurate the validation and root cause analysis. But with increasingly granular metadata available you might not need full capture of either networks or endpoints.
  • Workflows and Automation: The more structured you can make your response function, the better a shot junior analysts have at finding the root cause of an attack and figuring out how to contain and remediate it. Given the skills gap every organization faces, every bit of assistance helps. Response playbooks for a variety of different kinds of attacks within the security monitoring environment can help standardize and structure the response process. Additionally, being able to integrate with automation platforms (now called SOAR: Security Orchestration, Automation and Response) to streamline response — at least the initial phases — dramatically improves effectiveness.
  • Integration with malware tools: During validation you will also want to check whether an executed file is actually malware. The security monitoring platform can store executables and integrate with network-based sandboxes to explode and analyze files — to figure out both whether a file is malicious and what it does. This provides context for eventual containment and remediation. Ideally this integration will be native, and enable you to select an executable within the response console to send to the sandbox, with the verdict and report filed with the case.
  • Hunting: Threat hunting has come into vogue over the past few years, as mature organizations decided they no longer wanted to be at the mercy of security monitoring, desiring a more active role in finding attackers. So their more accomplished analysts started looking for trouble. They went hunting for adversaries rather than waiting for security monitors to report attacks in progress. Analysts need to figure out what behaviors and activities to hunt for, then look for them in your environment. The hunter starts with a hypothesis, then runs through scenarios to either prove or disprove it. If the hunter finds suspicious activity, more traditional response functions, such as searching, drilling down into available security data, and pivoting to other devices, all help follow the trail.
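
Retrospective search in particular is mechanically simple; what matters is that historical metadata is retained and queryable. A naive linear-scan sketch (a real platform would use an index, and the field names are illustrative):

```python
from datetime import datetime

# Scan stored event metadata for an indicator that wasn't known-bad
# at collection time, restricted to a lookback window.
def retrospective_search(events: list, ioc: str, since: datetime) -> list:
    hits = []
    for evt in events:
        ts = datetime.fromisoformat(evt["timestamp"])
        if ts >= since and ioc in (evt.get("domain", ""), evt.get("src_ip", "")):
            hits.append(evt)
    return hits
```

Given a freshly published indicator, this answers “were we already hit?” without waiting for a new alert to fire.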

Compliance and Reporting

As we’ve mentioned, compliance reporting is extremely resource intensive and doesn’t really add a lot of value to the organization. But if you screw it up, it can cost a lot of money in fines, etc. So the idea is to streamline the process of substantiating your controls to the greatest degree possible, so you can get the reports done as quickly as possible and get back to real work.

This distinctly unsexy requirement seems old hat, but you don’t want to go back to preparing for your assessments by wading through reams of log printouts and assembling data in Excel, do you? You want your security monitoring tool to ship with dozens of reports showing the controls you have in place and mapping them to compliance requirements, so you don’t need to do it manually.

You’ll want to be able to customize the reports that come with the tool, as well as develop your own reports, when needed.

Scalability

Over the past few years, as you’ve added mobile and cloud services and possibly endpoint data to your security data collection, you are dealing with a lot more data, and there are no signs of the increasing volume of security data abating. So you need to plan for scale.

  • Security Data: Does your existing security monitoring platform keep pace with the increase in data and continue to perform admirably? This is where the underlying architecture of the solution comes into play. Is the data aggregated on an appliance, which can get bogged down at high insertion rates? Does the offering leverage a cloud-based architecture, so you don’t know how it scales, it just does? Is it a combination of both to support your on-prem assets and your cloud-native technology stacks? Architecture is driven by your needs, just make sure that the solution can double in size within a reasonable timeframe without requiring a forklift upgrade, given that the only sure thing in technology is that you will be dealing with more data sooner than you expect.
  • Pricing scalability: Security monitoring can be priced based on events per second, which tends to be the historical way of doing pricing (when all of the data was collected by sensors sitting on a network). You increasingly see pricing models based on the volume of data aggregated per day. Either way, you have a disincentive to collect more data and that’s a problem when visibility of a sprawling IT environment is critical to your ability to detect attacks. So consider how the monitoring platform scales from a pricing standpoint as well.
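
The growth math is worth running before you sign. A toy model with made-up rates, purely to illustrate that both common pricing schemes scale linearly with collection, so doubling visibility doubles the bill:

```python
def annual_cost_eps(events_per_second: float, dollars_per_eps_per_year: float) -> float:
    # Events-per-second pricing: pay for sustained ingest rate.
    return events_per_second * dollars_per_eps_per_year

def annual_cost_volume(gb_per_day: float, dollars_per_gb: float) -> float:
    # Volume pricing: pay for each GB of security data aggregated.
    return gb_per_day * dollars_per_gb * 365

# Example with invented rates: 100 GB/day at $1/GB is $36,500/year,
# and doubling collection doubles it.
```

Run this kind of projection against your expected data growth over the contract term, not just today’s volume.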

Intangibles

As much as we’d like to rely only on technical requirements to buy the best security monitoring platform, there are always other factors that come into play.

  • Integration with broader product line: This is the age-old discussion: big security company, or focused upstart? Clearly from an innovation standpoint, smaller companies move faster, but many larger organizations are actively trying to reduce the number of vendors they deal with. A key question is whether you can get leverage by adopting a security monitoring platform from a big vendor that provides various other solutions in your environment. One thing to do is make sure the integration really exists. We don’t say that facetiously; just because a vendor acquired, or has an OEM agreement to provide, a technology doesn’t mean the solutions have been integrated much beyond the procurement process. That’s something to confirm during the PoC.
  • Ease of Management: How easy is it to manage your platform? To archive older data? To roll out new collectors, both on-prem and in the cloud? How about adding new use cases or customizing correlation rules? Are policy management screens easy to use, or do they consist of 500 checkboxes you don’t fully understand? Make sure you have good answers to these questions during the PoC, so you make sure the new tool doesn’t create more work.
  • Vendor viability: Have you ever bought a product from a smaller innovative company that doesn’t make it, for whatever reason, and been left holding the bag? Of course you have, so keep in mind that vendor fortunes can change dramatically and quickly. Your innovative small company may get acquired by a big IT shop and be run into the ground. Conversely, many larger security companies have struggled to scale (and show Wall Street growth and profits), forcing them to cut resources and miss huge innovations in the market. So buying from the big company isn’t always a safe bet either. Thus, always consider every potential vendor’s ongoing viability and ability to deliver on its roadmap, to ensure it lines up with what you need going forward.

Now that you have an idea about what you need to look for in a security monitoring solution, it’s time to talk to vendors and figure out what to buy, so we’ll wrap up the series with a deep dive into the procurement process, which is how you figure out what’s real and what’s not – before you write a (rather large) check.

– Mike Rothman

*** This is a Security Bloggers Network syndicated blog from Securosis Blog authored by info@securosis.com (Securosis). Read the original post at: http://securosis.com/blog/secmon-state-of-the-union-refreshing-requirements