Posted under: Research and Analysis
To be masters of the obvious, it’s not getting any easier to detect attacks. Not that it was ever really easy, but at least you knew what tactics the adversaries would use and you’d have a general idea of where they would end up because you knew where your important data was and largely had a single type of device that accessed it – the PC. Hard to believe we’re longing for the days of early PCs and centralized data repositories.
That is not today’s world. You face professional adversaries (and possibly nation-states) that use agile methods to develop and test attacks. They have means of obfuscating who they are and what they are trying to do, to further complicate detection. They prey upon perpetually gullible employees who click on anything to gain a foothold in your environment. Further complicating matters is the inexorable march toward cloud services, which moves unstructured content to cloud storage, outsources back office functions to a variety of service providers, and shifts significant portions of your technology environment to the public cloud. And these movements are accelerating, seemingly exponentially.
There has always been a playbook for dealing with attackers when we knew what they were trying to do. Whether or not you executed effectively on that playbook, the fundamentals were understood. As we mentioned in the Future of Security series, the old ways don’t work anymore, and that puts practitioners behind the 8 ball. The playbook has changed, and the old security architectures are rapidly becoming obsolete. For instance, it’s increasingly difficult to insert inspection bottlenecks into your cloud environment without adversely impacting the efficiency of the technology stack. Moreover, sophisticated adversaries can use exploits that won’t be caught by traditional assessment and detection technologies (even if they don’t have to use them frequently).
This means we need a better way to assess the security posture of your organization, detect attacks, and determine applicable methods to work around and eventually remediate exposures in your environment. As much as the industry whinges about adversary innovation, the security industry has made progress in improving your ability to assess and detect these attacks. We’ve written a lot about threat intelligence and security analytics over the past few years. Those are the cornerstone technologies for dealing with adversaries’ improved capabilities.
But these technologies and capabilities cannot stand alone. Just pumping some threat intel into your SIEM is not going to help you understand the contextual relevance of the information. And doing advanced analytics on the scads of security data you collect is not enough either because you may be missing a totally new attack vector.
Ultimately what you need is a better way to assess your organizational security posture, determine when you are under attack, and figure out how to make the pain stop. This involves not just technology, but also process changes and a clear understanding of how your technology infrastructure is evolving towards the cloud. This is no longer just assessment or analytics, it’s something bigger. It’s what we are now calling Security Decision Support (SDS). Snazzy, huh?
In this blog series “Evolving to Security Decision Support”, we’ll delve into these concepts and show you how to gain both the visibility and context to understand what you have to do and why. Security Decision Support provides a means of prioritizing the thousands of things you can do, allowing you to home in on the few things you must do.
As with all of Securosis’ research developed using our Totally Transparent methodology, we don’t mention specific vendors or products, focusing instead on the solution architecture and decision points that will help you practically leverage our research. Yet we still have to pay the bills, so we’ll take a moment to thank Tenable, who has agreed to license the content when it’s complete.
Visibility in the Olden Days
Securing pretty much anything starts with visibility. You can’t manage what you can’t see, and a zillion other overused adages all illustrate the same point. If you don’t know what’s on your network and where your critical data is, you don’t have much of a chance to protect it.
In the olden days (you know, back in the early 2000s), visibility was pretty straightforward. First you had your data on mainframes in the data center. Even when you started using LANs to connect everything, the data still lived on the raised floor or in a pretty simple email system. Early client/server systems started complicating things a bit, but everything was still on networks you controlled, in data centers you had the keys to. You could scan your address space to figure out where everything was and see which vulnerabilities needed to be dealt with.
That worked pretty well for a long time. There were issues of scale and the need/desire to scan higher into the technology stack, so you started seeing first stand-alone and then integrated application scanners. Once rogue devices started appearing on your network, it was no longer sufficient to just scan your address space every couple of weeks, so passive network monitoring allowed you to watch the traffic and flag (and assess) unknown devices.
Those were the good old days, when things were relatively simple. OK, maybe not simple, but you could size the problem. That’s no longer the case.
We use a pretty funny meme in many of our presentations. It shows a man from the 1870s blissfully remembering the good old days, when they knew where their data was. That image always gets a lot of laughs from the audience. But it’s laughter brought on by pain, because everyone in the room knows it’s true. Nowadays you don’t really know where your data is, and that really complicates your ability to determine the security posture of the systems with access to it.
These challenges are a direct result of a number of key technology innovations:
- SaaS: Securosis talks about the fact that SaaS is the New Back Office, and that has pretty drastic ramifications for visibility. Many organizations have deployed a CASB just to figure out what SaaS services are in use, because it’s not like your business folks come and ask permission to use a business-oriented service. This isn’t a problem that’s going away. If anything, more of your business processes will be moving to SaaS.
- IaaS: Speaking of cloudy stuff, you have teams that are using Infrastructure as a Service (IaaS), either moving existing systems out of your data centers or building new systems for the cloud. IaaS really messes with how you assess your environment. Scanning is a lot harder and some of the “servers” (now called instances) will only live for a few hours. The network addressing is different and you can’t really implement taps to see the traffic. It’s a different world for sure, and it’s one where you are pretty much blind.
- Containers: Another foundational technology allowing more portability and flexibility in how you build and deploy application components is containers. Without going into any detail about why containers are cool, suffice it to say that your developers are likely working with them as they architect new applications, especially in the cloud. But containers pose visibility and security challenges because they are short-lived (they spin up when you need them), self-contained (usually not externally addressable), and don’t provide access for a traditional scan. Thus containers pretty much break your existing discovery and assessment processes.
- Mobility: It seems kind of old hat to even be mentioning the reality that you have critical data on smart devices (phones and tablets), but it expands your attack surface and makes it hard to really understand both where your data is and how those devices are configured.
- IoT: A little further out on the horizon is the Internet of Things (IoT). Some would argue it’s here today, and with the number of sensors being deployed and smart systems that are network connected, those folks may be right. Regardless, if you look even just a year or two into the future, you can bet there will be a lot more network-connected devices accessing your data and expanding your attack surface. And that means you’ll need to be able to find and assess them.
And we are just getting started. It won’t be long before the next discontinuous innovation makes it harder to figure out where critical data resides and what’s happening with it. To put a bow on the discussion of the challenges you face, we’ll talk about some reasonable bets to make. We’re pretty confident that there will be more cloud in use tomorrow than there is today. We’re equally confident that there will be more devices accessing your stuff tomorrow than today. And that’s pretty much all you need to know to understand the extent and magnitude of the problem.
To again be masters of the obvious, it’s hard to be a security professional nowadays. We get it. Yet curling up into the fetal position on your data center floor isn’t really an option. First of all, you probably don’t even have a data center anymore. And if you do, it’s either being repurposed as warehouse space or has been sold off to a cloud provider. Second, that won’t really solve any problems.
So what to do? Remember that you can’t manage or protect what you can’t see, so first we need to focus on visibility as the first step on the path to Security Decision Support. Visibility across the enterprise. Wherever your data resides. On whatever platform. That means discovery and assessment of all your stuff.
We’re pretty sure you haven’t been able to totally shut off your data centers and move everything to SaaS and IaaS (even though you may want to), so that means you’ll need to make sure you aren’t missing anything within your traditional infrastructure. Thus, you’ll need to continue your existing vulnerability management program.
- Network, security, databases and systems: You already scan your network and security devices, all the servers you control, and probably your databases as well (thanks, compliance mandates!), so keep doing this. Hopefully you’ve been evolving your vulnerability management environment and have some means of prioritizing everything in it.
- Applications: You are likely scanning your web applications as well. That’s a good thing. Keep doing that. And keep working with the developers to ensure they fix the issues you find before something is deployed to millions of customers. Obviously, as developers continue to adopt agile methods of building software, you’ll still need to evangelize the need to identify issues within the application stack and, given the velocity of software changes, fix them faster.
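To make the prioritization idea above concrete, here is a minimal sketch. The field names, weights, and findings are illustrative assumptions, not from any particular scanner: the point is simply that severity alone is a poor ranking, so you combine it with asset criticality.

```python
# Hypothetical prioritization sketch: rank vulnerability findings by
# combining severity (a CVSS-like score) with the criticality of the
# asset where the finding lives. All names and weights are invented.

def priority(finding, asset_criticality):
    """Weighted score: severity x asset criticality (1 = lab box, 5 = crown jewels)."""
    return finding["cvss"] * asset_criticality.get(finding["asset"], 1)

findings = [
    {"id": "VULN-1", "asset": "web-db-01", "cvss": 9.8},
    {"id": "VULN-2", "asset": "test-box-7", "cvss": 9.8},  # same severity, throwaway asset
    {"id": "VULN-3", "asset": "web-db-01", "cvss": 5.0},
]
criticality = {"web-db-01": 5, "test-box-7": 1}

ranked = sorted(findings, key=lambda f: priority(f, criticality), reverse=True)
for f in ranked:
    print(f["id"], round(priority(f, criticality), 1))
```

Note that the identical 9.8 on the test box drops below a medium-severity issue on the critical database; that is the whole point of prioritization.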
That’s the stuff that you should already be doing. Maybe not as well as you should (there is always room for improvement, right?), but at least for compliance purposes you are already doing something. Where it gets interesting is how discovery and assessment works for a lot of these new environments and innovations that you need to grapple with. Let’s look at the innovations we described above and get a sense for how things change in this new world.
As we mentioned, many of you have deployed a CASB (cloud access security broker) to look at your egress network traffic and figure out which SaaS services are actually in use. It’s always entertaining to hear the anecdotes of how some vendors will ask a customer how many SaaS services they think are being used, and they say maybe a couple dozen. And then the vendor (with great dramatic effect) drops the report on the desk, and the number is closer to 1,500.
To be clear, you don’t have to use a purpose-built device or service to figure out which SaaS is in use, since many secure web gateways offer this kind of visibility, as do DLP solutions (focused on controlling exfiltration). So one method of discovery is looking at egress traffic.
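The egress-traffic approach boils down to counting which known SaaS domains show up in outbound web logs. Here is a toy sketch; the log format and domain list are made up for illustration (real gateways and CASBs export far richer records and maintain domain catalogs for you):

```python
# Sketch of SaaS discovery from egress web logs. Assumes each log line
# looks like "<timestamp> <user> <url>" -- an invented format.
from collections import Counter
from urllib.parse import urlparse

KNOWN_SAAS = {"salesforce.com", "dropbox.com", "box.com", "slack.com"}

def saas_usage(egress_log_lines):
    """Count requests per recognized SaaS domain."""
    usage = Counter()
    for line in egress_log_lines:
        url = line.split()[-1]
        host = urlparse(url).hostname or ""
        # naive registered-domain match: last two labels of the hostname
        domain = ".".join(host.split(".")[-2:])
        if domain in KNOWN_SAAS:
            usage[domain] += 1
    return usage

log = [
    "2018-01-02T10:00 alice https://app.salesforce.com/login",
    "2018-01-02T10:01 bob https://www.dropbox.com/home",
    "2018-01-02T10:02 alice https://na3.salesforce.com/api",
]
print(saas_usage(log))  # salesforce.com seen twice, dropbox.com once
```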
Another means of discovery and assessment is the SaaS provider’s API (application programming interface). The more mature SaaS companies understand that visibility is a problem, so they offer reasonably granular usage and activity data via their APIs. You can pull down this information and integrate it with your other security data for analysis. We’ll dig into the analysis aspect of Security Decision Support in the next post.
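In practice, pulling audit activity from a SaaS API means paging through events and normalizing them into a common schema for your analytics layer. The sketch below is hedged: the endpoint shape, field names, and cursor-based paging are assumptions standing in for whatever each real provider (Salesforce, G Suite, Office 365, etc.) actually exposes, and the HTTP call is stubbed out:

```python
# Hedged sketch: pull activity events from a hypothetical SaaS audit API
# and normalize them for downstream analytics. Replace fetch_page with a
# real HTTP client against your provider's documented endpoint.
import json

def fetch_page(cursor=None):
    """Stand-in for e.g. GET /api/v1/audit?cursor=...; returns (events, next_cursor)."""
    pages = {
        None: ([{"user": "alice", "action": "download", "file": "q3.xlsx"}], "p2"),
        "p2": ([{"user": "bob", "action": "share", "file": "plan.doc"}], None),
    }
    return pages[cursor]

def pull_audit_events():
    """Walk all pages, mapping provider fields into a common schema."""
    events, cursor = [], None
    while True:
        page, cursor = fetch_page(cursor)
        events.extend({"who": e["user"], "what": e["action"], "obj": e["file"]}
                      for e in page)
        if cursor is None:
            return events

print(json.dumps(pull_audit_events(), indent=2))
```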
As your organization moves existing systems to the cloud and builds new applications there, you’ll need to take more proactive measures to get a sense of what resources actually live in the cloud. Unlike SaaS, where someone presumably connects to a service from inside your organization (and you can presumably see that), an egress filter isn’t going to provide much detail about what lies within a public cloud service.
In this case, the API really is your friend. Any tool that focuses on visibility will need to poll the cloud provider’s API to learn what systems are running in an environment, and then to assess them. One note of caution: the API rate limits of some cloud providers. You cannot make infinite API calls to any cloud provider (for obvious reasons), so you’ll need to build your IaaS environment with this in mind.
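The standard way to live within those rate limits is to back off exponentially when the provider tells you to slow down. A minimal sketch follows; the client call and throttle exception are stand-ins, not a real cloud SDK (real SDKs typically build this retry behavior in):

```python
# Sketch of polling a cloud provider inventory API while respecting rate
# limits: retry with exponential backoff when the API signals throttling.
import time

class Throttled(Exception):
    """Stand-in for a provider's rate-limit error."""

def list_instances_with_backoff(call, max_retries=5, base_delay=1.0):
    """Invoke the API call, sleeping base_delay, 2x, 4x, ... on throttle."""
    for attempt in range(max_retries):
        try:
            return call()
        except Throttled:
            time.sleep(base_delay * (2 ** attempt))
    raise RuntimeError("API still throttled after %d retries" % max_retries)

# Demo: a stub API that throttles twice before answering.
calls = {"n": 0}
def flaky_describe():
    calls["n"] += 1
    if calls["n"] < 3:
        raise Throttled()
    return ["i-0abc123", "i-0def456"]

print(list_instances_with_backoff(flaky_describe, base_delay=0.01))
```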
We favor cloud architectures that use multiple accounts per application, for lots of reasons. Overcoming API limitations is one, as is minimizing the blast radius of an attack through better functional isolation between applications. But that’s a much larger discussion for a different day. You can check out our recent video about this very concept, if you are so inclined.
Befriend the Accountants
A point we’ll make about cloud services in general should be familiar from many contexts: follow the money. For both SaaS and IaaS, the only thing we can be sure of is that someone is getting paid for any service you use. And that means whoever pays the bill should be able to tell you what services are in use, and for what.
So another recommendation we have is to make sure that you are friendly with the accounting team. Take them out to lunch from time to time. Support their charitable causes. Whatever it takes to keep them on your side and responsive to your requests for accounting records for cloud services.
To be clear, a report from accounting is not a replacement for pulling information from APIs or monitoring your egress traffic. Attackers move fast, and can do a lot of damage in the time it takes for the provider to bill you and for accounting to receive and process the bill, so you are likely 4–6 weeks behind what’s really happening. You’ll want to use this kind of information to verify what you should already know, and to identify the stuff you should know about but maybe don’t.
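That verification step is essentially a set reconciliation: compare the services accounting is paying for against the services your discovery tooling has actually observed. A tiny sketch, with invented service names, to show the two interesting gaps:

```python
# Sketch: reconcile billed cloud services (from accounting) against
# services observed by discovery tooling. Service names are invented.

billed = {"aws", "salesforce", "dropbox", "mystery-analytics-saas"}
discovered = {"aws", "salesforce", "dropbox", "slack"}

# Paid for but never observed on the wire: shadow IT or dead spend.
unseen = billed - discovered
# Observed in use but no invoice found: free tier, or on someone's card.
unbilled = discovered - billed

print("investigate (billed, never seen):", sorted(unseen))
print("investigate (seen, never billed):", sorted(unbilled))
```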
Given that containers encapsulate microservices, which are not usually persistent and cannot really be accessed or scanned by external entities (like a vulnerability scanner), the model of a separate capability to discover and assess containers doesn’t really work. Instead, you’ll have to build discovery and assessment into the container system. First, you’ll want to make sure that none of the containers you build are vulnerable, so you’ll integrate assessment into the container build process. That way, any container spun up is built from an image that is not vulnerable.
Then you’ll also want to track usage of the containers and make sure nothing drifts, which means inserting some kind of technology (an agent or API call) into the build/deploy process as containers spin up. That technology will track each container through its lifecycle (reporting back to a central repository) and watch for signs of an attack on the component. Let’s reiterate that this isn’t something you can bolt on after the fact (like most security). So right when you are done buying pizza for the accounting team, you may want to have a happy hour with the developers. Without their participation, you’ll have precious little visibility into your container environment.
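The build-time assessment described above amounts to a gate in the pipeline: inspect the image’s contents, compare against a vulnerability feed, and block the build on a match. The sketch below uses a toy package list and feed; a real pipeline would call an actual image scanner and CVE feed instead:

```python
# Sketch of a container build-time gate: fail the build if the image
# contains packages at known-vulnerable versions. The feed and package
# inventory here are toy stand-ins for a real scanner and CVE feed.

VULN_FEED = {  # package -> set of vulnerable versions (illustrative)
    "openssl": {"1.0.1f"},
    "bash": {"4.2"},
}

def assess_image(installed):
    """Return (package, version) pairs that should block the build."""
    return [(pkg, ver) for pkg, ver in installed.items()
            if ver in VULN_FEED.get(pkg, set())]

image_packages = {"openssl": "1.0.1f", "bash": "4.4", "curl": "7.52"}
findings = assess_image(image_packages)
if findings:
    print("BLOCK build:", findings)
else:
    print("image clean, proceed to deploy")
```

In CI this check runs after the image is built and before it is pushed to the registry, so nothing vulnerable ever becomes a running container.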
It’s been a while since you could stick your head in the sand and hope that mobile devices were a passing fad. Nowadays they are full participants in your IT environment, and new, innovative applications are being rolled out to gain business advantage from the flexibility of these devices. But with the ubiquity of a class of technology, lots of solutions emerge to address the common problems.
In terms of visibility and assessment of mobile devices, there are dozens of solutions (even after consolidation). To access corporate data or install purpose-built mobile apps, a device needs to be registered with the corporation’s mobile device management (MDM) environment. These platforms can provide an inventory of not just the devices, but what is installed on each. More sophisticated offerings can now block certain apps from running, or stop a device from accessing some networks, based on its configuration and assessment.
So that’s the good news. Where there is still work to do is in integrating that information into the rest of the Security Decision Support stack. You’ll want to be able to pull telemetry from the MDM environment and use it as part of your security analytics strategy. For example, figuring out that a certain person’s mobile device is accessing cloud data stores they aren’t authorized to look at, while that same user’s computer is doing recon on the finance network, could be an indication of a successful compromise. You’d like your analytics environment to connect the two data points, which highlights the importance of enterprise visibility. But let’s not get ahead of ourselves; we’ll get into analytics in the next post.
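The correlation described in that example is conceptually simple once both telemetry streams land in one place: intersect the users seen doing each suspicious thing. A hedged sketch, with invented event shapes (real MDM and network telemetry will look nothing this tidy):

```python
# Sketch of cross-source correlation: flag users whose mobile device
# accessed a cloud data store without authorization AND whose workstation
# generated recon-like traffic. Event formats are invented.

mdm_events = [
    {"user": "carol", "event": "cloud_store_access", "authorized": False},
    {"user": "dave", "event": "cloud_store_access", "authorized": True},
]
network_events = [
    {"user": "carol", "event": "portscan", "segment": "finance"},
]

def correlate(mdm, net):
    """Users appearing in both suspicious sets warrant investigation."""
    suspicious_mobile = {e["user"] for e in mdm
                         if e["event"] == "cloud_store_access" and not e["authorized"]}
    recon_users = {e["user"] for e in net if e["event"] == "portscan"}
    return sorted(suspicious_mobile & recon_users)

print("possible compromise:", correlate(mdm_events, network_events))
```

Neither signal alone is damning; the intersection is what raises the priority, which is exactly why the telemetry needs to flow into a common analytics layer.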
The thing about many IoT devices is that they aren’t your standard, run-of-the-mill PCs or mobile devices. They likely don’t have an API you can poll to figure out what’s going on, nor do they allow you to install an agent to monitor activity. Additionally, these devices can appear on network segments that may not be as heavily monitored or protected, such as the shop floor or the security video network.
Figuring out the presence of these devices, assessing security, and then looking for potential misuse requires a different approach, one that is largely passive in nature. So your best bet will be to monitor those networks, profile the devices on each network, baseline their typical traffic patterns and then look for situations where the devices aren’t acting normally. Yet, it is a little more challenging than collecting a bunch of NetFlow records on a shop floor network. These IoT devices may use proprietary, non-standard protocols further complicating the discovery and assessment process. As you factor these types of devices into your Security Decision Support strategy, you’ll need to weigh the complexity of identifying and assessing these devices against the risk of attack.
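The passive approach sketched above (profile, baseline, then flag deviations) can be illustrated with flow records. The flow format, thresholds, and device names below are illustrative assumptions; real deployments work from NetFlow/IPFIX or packet captures and need protocol-aware profiling:

```python
# Sketch of passive IoT baselining: learn each device's usual peers and
# typical bytes-per-flow from historical flow records, then flag flows
# to new peers or with outsized volume. All values are illustrative.
from collections import defaultdict
from statistics import mean

def build_baseline(flows):
    """baseline[device] = (set of usual peers, mean bytes per flow)."""
    peers, volumes = defaultdict(set), defaultdict(list)
    for f in flows:
        peers[f["src"]].add(f["dst"])
        volumes[f["src"]].append(f["bytes"])
    return {d: (peers[d], mean(volumes[d])) for d in peers}

def anomalies(flows, baseline, volume_factor=10):
    """Flag flows to an unknown peer, or far above the device's average volume."""
    out = []
    for f in flows:
        usual_peers, avg = baseline.get(f["src"], (set(), 0))
        if f["dst"] not in usual_peers or f["bytes"] > volume_factor * avg:
            out.append(f)
    return out

history = [
    {"src": "plc-7", "dst": "hmi-1", "bytes": 900},
    {"src": "plc-7", "dst": "hmi-1", "bytes": 1100},
]
baseline = build_baseline(history)
today = [
    {"src": "plc-7", "dst": "hmi-1", "bytes": 1000},         # looks normal
    {"src": "plc-7", "dst": "203.0.113.9", "bytes": 50000},  # new peer, huge volume
]
print(anomalies(today, baseline))
```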
Of course, we need to put a few caveats around these concepts. First, emerging technologies are moving targets. Take IaaS as an example: cloud providers are rapidly introducing APIs and other mechanisms to provide a view into what’s running in their environments. You could make similar points for every class of technology. Most device makers (regardless of the type of device) seem to realize that customers want to manage their technology as part of a bigger system, so many (but not all, sigh) are providing better access to their innards in more flexible ways. That’s the kind of progress you like to see.
Yet, tomorrow’s promise doesn’t solve today’s problem. You have to build a process and implement tooling based on what’s available today. Thus, for many of these emerging environments, you’ll want to build a periodic revisitation of your strategy into your SDS process, similar to how you (should) revisit your malware detection approaches periodically.
Yes, revisiting your enterprise visibility approaches can be time consuming, and expensive if something needs to change. Reversing course on decisions you made over the past year can be frustrating. But that’s the world you live in, and resisting it will just make you cranky. Or, more accurately, more cranky. If you expect to revisit all of these decisions, and at times toss some tools and embrace others, it becomes much easier to handle. Even more importantly, managing management’s expectations that this could (or more likely will) happen will go a long way toward maintaining your current employment status.
To summarize, the first step toward Security Decision Support is enterprise visibility: understanding the exposure of your assets and data, wherever they reside. Next we’ll dig into figuring out what’s really at risk, by integrating an external view of the security world (threat intel) and doing more sophisticated analytics on the internal security data you collect.
*** This is a Security Bloggers Network syndicated blog from Securosis Blog authored by firstname.lastname@example.org (Securosis). Read the original post at: http://securosis.com/blog/evolving-to-security-decision-support-visibility-is-job-1