After more than 25 years of the modern IT security industry, breaches still happen at an alarming rate. Yes, that’s pretty obvious, but it’s clearly disappointing given the billions spent every year to remedy the situation. Over the past decade, the mainstays of security controls have undergone the “next generation” treatment, initially with firewalls and more recently with endpoint security. New analytical techniques have been wielded to examine infrastructure logs in a more sophisticated fashion.
Yet it seems the industry continues to miss the point. The objective of nearly every hacking campaign remains to steal data. So why the focus on better infrastructure security controls and more impactful analytics of said infrastructure? Mostly because data security is hard. The harder the task, the less likely it is that organizations already overwhelmed with other work will have the fortitude to make the sustainable changes necessary.
To be clear, we totally understand the need to live to fight another day. That’s the security person’s ethos, and it has to be. There are devices to clean up, incidents to respond to, reports to write and new architectures to figure out. The idea of tackling something as nebulous as data security, with no obvious answer, remains a bridge too far.
Or does it? We believe the time has come to revisit the broader concept of data security and to utilize many of the new techniques pioneered for the infrastructure to increasingly handle the insider threat where it starts. And that’s the data. Thus in this new series, Protecting What Matters: Introducing Data Guardrails and Behavioral Analytics, we will introduce some new practices and highlight new approaches to protecting the data.
Before we get started, let’s send a shout-out to Box for agreeing to license the content when we finish up the series. Without clients like Box, who understand the need for forward-looking research that tells you where things are going rather than reports telling you where they’ve been, we couldn’t produce research like this.
Understanding Insider Risk
While security professionals like to throw around the term “insider threat,” it’s often nebulously defined. In reality, it includes multiple categories, including external threats that leverage insider access. We believe that to truly address a risk you need to understand it in the first place (I know, call us crazy). So let’s break the insider threat down a level and enumerate the typical categories of risk:
- Accidental misuse: In this scenario, the insider doesn’t do anything malicious; rather, they make a mistake that results in data loss. For example, a customer service rep could respond to an email sent by a customer that includes private account info. The rep isn’t trying to violate policy; they just didn’t take the time to scrub the private data out of the message.
- Tricked into unwanted actions: Employees are human, and they can be duped into doing the wrong thing. Phishing is a great example, as is granting access to a folder based on a call from someone impersonating an employee. Again, this isn’t a malicious act, but it can result in a breach all the same.
- Malicious misuse: Sometimes you have to deal with the reality of a malicious insider intentionally stealing data. With the first two categories, the person isn’t trying to mask their behavior. In this situation, they are intentionally obfuscating it, which means you’ll need different tactics to prevent and detect the activity.
- Account takeover: This category reflects the reality, described above, that once an external adversary has a presence on a device, they are effectively an insider; with the compromised device and account, they have access to lots of critical data.
We also need to look at these categories in the context of adversaries so you can properly align your security architecture. So who are the main adversaries trying to access your stuff? A coarse categorization would be: unsophisticated attackers (using widely available tools), organized crime, competitors, state-sponsored actors, and actual insiders. Depending on your most likely adversary and their typical tactics, you can design a set of controls to more effectively protect your data.
For example, an organized crime faction looks for banking data or personal information to leverage in identity theft, whereas a competitor looks for information about product plans or pricing strategies. You can (and should) design your data protection strategy with these likely adversaries in mind to help prioritize what to protect and how to protect it.
Now that we understand the adversaries and can infer their primary tactics, we have a better sense of their missions. In turn, we can design a data security architecture to minimize risk and, optimally, prevent any kind of data loss. But that requires tactics beyond what is normally considered data security.
A New Way to Look at Data Security
If you surveyed security professionals and asked what data security means to them, they’d likely say either encryption or data loss prevention (DLP). When all you have is a hammer, everything looks like a nail, and our hammers have tended to be one of these two solutions. We want to expand the perspective a bit, but that doesn’t mean DLP and encryption no longer have a role in data protection. Of course they do. But we can supplement these controls with some new tactics.
- Data Guardrails: We’ve defined guardrails as a means to enforce best practices without slowing down or impacting typical operations. Typically used within the context of cloud security (like, er, DisruptOps), a data guardrail would allow data to be used in certain ways while blocking unauthorized usage. To bust out an old network security term, think of this as “default-deny” for your data. You define the set of acceptable practices and don’t allow anything else.
- Data Behavioral Analytics: Many of you have likely heard of UBA (User Behavioral Analytics), where all of a user’s activity is profiled and you look for anomalous activities that could indicate one of the insider risk categories enumerated above. What if you turned UBA inside out and focused on the data? Using similar analytics, you could profile the usage of all the data in your environment and then look for abnormal patterns that warrant investigation. We’ll call this DataBA, since your database administrators would be a little peeved if we horned in on their job title.
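To make the “default-deny for your data” idea concrete, here is a minimal sketch of how a data guardrail policy check might work. Everything here is hypothetical (the roles, actions, and classifications are invented for illustration; real products express these rules in their own policy languages), but it shows the core mechanic: enumerate the acceptable uses, and deny everything else.

```python
# Hypothetical default-deny guardrail: each entry allows one
# (role, action, data_classification) combination. Any request
# that doesn't match an explicit rule is denied.
ALLOWED = {
    ("customer_service", "read", "account"),
    ("customer_service", "reply", "public"),
    ("finance", "read", "payment"),
}

def guardrail_check(role: str, action: str, classification: str) -> bool:
    """Return True only if the request matches an explicitly allowed rule."""
    return (role, action, classification) in ALLOWED

# A customer service rep reading an account record is allowed...
print(guardrail_check("customer_service", "read", "account"))            # True
# ...but emailing payment data externally was never allowed, so it's denied.
print(guardrail_check("customer_service", "email_external", "payment"))  # False
```

The point of the design is that nobody has to predict every bad use of the data, which is where blocklist-style approaches tend to fall down. You only enumerate the good ones.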
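And to illustrate the DataBA idea, here is one simple way to profile data usage and flag abnormal patterns: baseline how often a piece of data is accessed, then flag days that deviate far from that baseline. The access counts and threshold below are invented for illustration; production analytics would profile many more dimensions (who, when, from where, how much), but the shape of the approach is the same.

```python
import statistics

# Hypothetical daily access counts for one sensitive file over two weeks,
# forming the behavioral baseline for that data.
history = [12, 9, 11, 10, 13, 8, 12, 11, 10, 9, 12, 10, 11, 9]

def is_anomalous(today: int, history: list, threshold: float = 3.0) -> bool:
    """Flag today's usage if it deviates more than `threshold`
    standard deviations from the historical norm."""
    mean = statistics.mean(history)
    stdev = statistics.stdev(history)
    return abs(today - mean) > threshold * stdev

print(is_anomalous(11, history))   # a normal day -> False
print(is_anomalous(250, history))  # a mass download -> True, worth investigating
```

Note what this catches that user-centric profiling can miss: the 250 accesses might come from a perfectly ordinary account, but the *data’s* usage pattern is what looks wrong.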
In the next post, we’ll dig more into these new concepts of Data Guardrails and DataBA to illuminate both the approach and the pitfalls you’ll face.
*** This is a Security Bloggers Network syndicated blog from Securosis Blog authored by [email protected] (Securosis). Read the original post at: http://securosis.com/blog/introducing-data-guardrails-and-behavioral-analytics-understand-the-mission