The biggest challenge in data security is the sheer volume and pace of data growth, even more than the shift from relational to unstructured data or the migration of data to the cloud. “Cloud scale” usually refers to technical concerns like data center size and operations or networks and network capacity. But what’s really driving all of this is data: collecting it, crunching it, acting on it, which in turn creates more data. Some predictions put worldwide data creation at more than 40 zettabytes by 2020.
That’s a lot to keep up with. Especially for an IT staff that’s not growing nearly as fast as the data it has to protect.
Key to securing data is knowing where it is (discovery), knowing who accesses it (monitoring), identifying what is “wrong” (analytics), and taking action when something wrong is found (remediation). The challenge for any monitoring system is to do the job without affecting the performance of the system being monitored. Consider a soccer game, for example. The referee has to know all the rules, pay attention, and act immediately – all while staying out of the way.
It’s the same when securing data. To be effective, you have to know what you’re looking for, monitor activity constantly, find the dangerous activities and behaviors – and yes, not impact the production systems.
Outpacing Data Growth at “Cloud Scale”
The only way to win is for the monitoring system to be faster than the database systems it monitors, no matter how much data there is. This is especially tricky when there are stringent controls and requirements for separation of duties and/or monitoring privileged users. In these cases, you cannot rely upon “native audit” from the systems themselves, since privileged users can bypass or change those controls. In many cases, because they have “root access,” they can also log in using application accounts (sometimes called service accounts) to mask their actions. Together, these constraints mean that:
- A compensating control/mechanism needs to be deployed independent of privileged users
- Monitoring the activities of just the privileged user accounts isn’t sufficient. Application accounts must also be monitored.
And it is monitoring the data access activities of the application accounts that creates the “cloud scale” problem, since virtually every organization is pushing as much interaction as possible through applications.
Our Imperva SecureSphere solution is designed to be faster than the data stores it monitors, even when strict separation of duties and monitoring of application-account access are required. The result: no matter how many databases are being monitored, and no matter how fast those databases write, SecureSphere stays ahead. No matter how often an organization needs to run a database classification scan, SecureSphere is on top of it.
Architecturally, this is accomplished using four fundamental design points:
- Separation of security monitoring from audit activity logging
- A back-end data store that is faster than the databases being monitored
- Agents that don’t degrade performance
- A “scale-out” architecture for SecureSphere itself
Security Separate from Audit
If security is the goal, then all activity must be monitored. But this doesn’t mean that all activity needs to be logged. Fundamental to SecureSphere is the ability to monitor activity and apply policy without needing to write a log of the activity itself. There are certain activities (e.g., activities by privileged user accounts) where you might want a log record of each and every action. However, you shouldn’t need to write a log record in order for policy to be applied.
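This split can be illustrated with a short sketch. The code below is a hypothetical illustration of the design point, not SecureSphere internals: every event passes through policy evaluation (security), but a log record is written only when the policy calls for one, such as for privileged accounts (audit).

```python
# Hypothetical sketch: evaluate policy on every event, but persist an audit
# record only for the subset of activity (here, privileged accounts) that
# requires one. Names and rules are illustrative assumptions.
from dataclasses import dataclass, field


@dataclass
class Event:
    account: str          # e.g. "app_svc" or "dba_admin"
    operation: str        # e.g. "SELECT", "DROP TABLE t"
    privileged: bool


@dataclass
class Monitor:
    audit_log: list = field(default_factory=list)

    def handle(self, event: Event) -> str:
        # Security: every event is inspected and a verdict is produced...
        risky = event.operation.startswith("DROP")
        verdict = "block" if risky and not event.privileged else "allow"
        # Audit: ...but only privileged-user activity is written to the log.
        if event.privileged:
            self.audit_log.append((event.account, event.operation, verdict))
        return verdict


monitor = Monitor()
monitor.handle(Event("app_svc", "SELECT", privileged=False))        # inspected, not logged
monitor.handle(Event("dba_admin", "DROP TABLE t", privileged=True))  # inspected and logged
```

The point of the separation: the high-volume application-account traffic still gets full policy coverage without paying the cost of writing a record per event.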
Highly Scalable Back-end Data Store
When the monitoring system does need to write a log of activity, it must be “faster” than the system it is monitoring; otherwise, the arrival rate of activity from the monitored databases will overwhelm it. SecureSphere’s back-end data store was designed from scratch for this scalability and does not rely upon a relational database, which is notoriously slow from a “write activity” perspective.
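To see why a purpose-built store can out-write a relational one, consider the hot path. The sketch below is an assumption-laden illustration, not Imperva’s implementation: an append-only event log turns every write into a sequential append, with no index maintenance, locking, or transaction overhead per record.

```python
# Illustrative append-only event store: each record is one serialized line
# appended sequentially. A stand-in in-memory buffer replaces the log file.
# This is a generic technique sketch, not SecureSphere's actual data store.
import io
import json


class AppendOnlyStore:
    def __init__(self) -> None:
        self.buf = io.BytesIO()  # stands in for a sequential log file

    def append(self, record: dict) -> None:
        # No random I/O, no B-tree updates, no row locks on the write path.
        self.buf.write(json.dumps(record).encode() + b"\n")

    def count(self) -> int:
        return self.buf.getvalue().count(b"\n")


store = AppendOnlyStore()
for i in range(3):
    store.append({"seq": i, "sql": "SELECT 1", "account": "app_svc"})
```

A relational INSERT, by contrast, typically updates indexes and honors transactional guarantees on every write, which is exactly the overhead a high-arrival-rate audit store wants to avoid.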
Agents That Don’t Degrade Performance
Agents are admittedly a double-edged sword. They are required if strict separation of duties and “tamperproof” audit fidelity for privileged users are a must. But, since they run on the databases themselves, there is the danger they can degrade performance. SecureSphere agents are designed to be extremely lightweight. SecureSphere limits agents to data capture and policy enforcement, which avoids unnecessary processing cycles on the database host. Parsing and policy evaluation are performed by the gateways.
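The division of labor described above can be sketched as follows. This is a simplified illustration of the general pattern (capture on-host, analyze off-host) with invented names, not Imperva’s code: the agent only captures and forwards raw traffic, while the gateway does the expensive parsing and policy work.

```python
# Generic capture/forward vs. parse/evaluate split. The Agent does minimal
# work on the monitored host; the Gateway carries the CPU-heavy analysis.
# All class names and logic here are illustrative assumptions.
from queue import Queue


class Agent:
    """Runs on the database host: capture and forward only."""

    def __init__(self, channel: Queue) -> None:
        self.channel = channel

    def capture(self, raw_packet: bytes) -> None:
        self.channel.put(raw_packet)  # no parsing here: minimal CPU cost


class Gateway:
    """Runs off-host: parsing and policy evaluation happen here."""

    def __init__(self, channel: Queue) -> None:
        self.channel = channel

    def process(self) -> str:
        raw = self.channel.get()
        statement = raw.decode()  # "parsing", heavily simplified
        return "alert" if "DROP" in statement else "ok"


channel = Queue()
agent, gateway = Agent(channel), Gateway(channel)
agent.capture(b"DROP TABLE users")
```

Because the agent never parses SQL or evaluates policy, its footprint on the production database stays small regardless of how complex the policies become.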
Scale Out Architecture
SecureSphere is designed to “scale out” across both the data plane and the management plane. A clustered architecture, with built-in load balancing among components, allows it to dynamically adapt to changes in the underlying database activity.
The Gold Standard
“Cloud scale” is the term for operating at the speed, efficiency and scale of the world’s largest cloud platforms. Imperva SecureSphere is designed from the ground up to monitor and secure database activity at this scale.
Learn more about best practices for securing data in your enterprise. Read our paper on seven keys to a secure data solution.
This is a Security Bloggers Network syndicated blog post authored by Morgan Gerhart. Read the original post at: Blog | Imperva