That’s a provocative title, and deliberately so. The point is that “more” security doesn’t always have the intended effect. Yes, we all know about “defense in depth” and that a single security solution isn’t the answer. That’s still true—I’m talking about unintended secondary effects of adding security.
Computing terminology is full of real-world metaphors: desktops, folders, even firewalls. So it may be helpful to look at real-world examples where "more" security isn't desirable. An obvious example is a deadbolt on a screen door: it won't slow down a thief at all.
In my youth, I spent a lot of time at a friend’s country farm. At some point, I asked why they locked the front door at night, but not when leaving the house. “Because if someone wants to break in while we aren’t here, the lock isn’t going to stop them—all it will mean is that besides getting robbed, we’ll have to repair the door. Plus it being unlocked might make them think we’re here. At night, if someone kicks the door in, I’ll wake up and do something about it.”
A final example: another friend grew up in a small town near a prison, and told me that they left their cars unlocked with the keys in the ignition. Why? Because if someone escaped from the prison and came upon their house, they'd rather he simply take the car than come inside and "ask" for the keys!
How do these examples apply to computing? Adding computer security layers has the same potential for unintended consequences. While some consequences are obvious (decreased performance being the most common; who hasn't found an end-user machine with anti-virus disabled "because it made it too slow"?), they are more often political or human.
Recent articles about the National Institute of Standards and Technology (NIST) rescinding some earlier password recommendations illustrate this. While there were solid reasons for the recommendations, they turned out not to be useful in practice. End-users simply aren’t going to completely change their password every ninety days: instead, they’re going to cycle through Hamster1, Hamster2, etc.
Working for an encryption company, I naturally look at this in terms of data protection. Like anti-virus, encryption is not "free": it has performance impacts. Worse, though, is the confusion that often surrounds where data should be encrypted. The ease and effectiveness of encryption are inversely related: the easiest solutions are the least effective, and vice versa.
Consider some options for where to encrypt data:
- In the hardware or filesystem
- In the database
- In the application
As you traverse this hierarchy from hardware- to application-level, implementation difficulty increases—but so does security.
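To make the top of that hierarchy concrete, here's a minimal Python sketch of application-level encryption: the application encrypts a field before handing it to any lower layer, so the database, filesystem, and disk only ever see ciphertext. The cipher here is a toy SHA-256 counter-mode keystream, used purely for illustration; a real deployment would use a vetted algorithm such as AES-GCM from a maintained cryptography library, and the key/nonce names below are hypothetical.

```python
import hashlib

def keystream(key: bytes, nonce: bytes, length: int) -> bytes:
    """Toy keystream: SHA-256 over key || nonce || counter (illustration only)."""
    out = b""
    counter = 0
    while len(out) < length:
        out += hashlib.sha256(key + nonce + counter.to_bytes(8, "big")).digest()
        counter += 1
    return out[:length]

def xor_crypt(key: bytes, nonce: bytes, data: bytes) -> bytes:
    """Encrypt or decrypt by XORing with the keystream (the operation is symmetric)."""
    ks = keystream(key, nonce, len(data))
    return bytes(a ^ b for a, b in zip(data, ks))

# The application encrypts before the value reaches any lower layer:
key = b"app-held-secret-key"          # held by the application, not the DBA
nonce = b"row-42"                     # hypothetical per-record nonce
plaintext = b"4111-1111-1111-1111"
stored = xor_crypt(key, nonce, plaintext)   # this is all the database ever sees

assert stored != plaintext
assert xor_crypt(key, nonce, stored) == plaintext  # round-trips correctly
```

The point of the sketch is the placement, not the cipher: because encryption happens in the application, a compromise of the storage, filesystem, or database layers yields only ciphertext.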
The appeal of hardware-level encryption is that it’s easy: the storage administrator enables it, and it just works. None of the layers above will even be aware of it: it’s completely transparent. On the other hand, the only things that hardware encryption protects against are:
- Stolen physical disk drives (hence its heavy use in laptops)
- Accidental shared disk access (for SANs and mainframe disks, where such things are possible)
- Unprotected backups (if the backup solution is able to back data up in its encrypted state)
- Recovered latent data from old, discarded drives (avoiding shredding/data security erase)
With hardware-level encryption, any operating system, file, database, or application-level attack will still succeed, because those layers aren’t protected.
Jumping to per-file encryption: implementation is relatively easy. Appropriate files are identified and encrypted, and access controls allow only authorized users to decrypt. This approach provides all the benefits of hardware- or filesystem-level encryption, and also lets DBAs and storage administrators do their jobs (moving data between disks, performing backups, and so on) without giving them access to the cleartext; this requires utilities that use special access methods to bypass decryption at the operating-system level. Since file-level encryption requires at least some manual decision-making about which files to protect, it takes a bit more effort than hardware or filesystem encryption, but is still quite easy.
Database-level encryption protects against the same threats as disk- or file-level encryption, and additionally blocks any use of the data except through database interfaces, again transparently. It is thus slightly higher effort again, but still relatively easy.
Finally, application-level encryption provides all the benefits of the lower-level approaches, plus any compromise requires application-level access. Clearly this is the most secure approach, but it is also the highest effort, because it is not transparent to applications. Format-preserving data protection methods, like SecureData's Format-Preserving Encryption and Secure Stateless Tokenization, reduce the impact significantly, both by avoiding schema changes and by enabling most applications to use the data in its protected state. However, implementation effort is still significant.
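The format-preserving idea can be sketched in a few lines: protect a numeric value so that the result is still the same length and still all digits, letting database schemas and most application code stay unchanged. The toy below uses a keyed per-position digit shift; it is nothing like real format-preserving algorithms (such as NIST's FF1) and is not secure, but it illustrates the format-preservation property the paragraph above describes. All names here are hypothetical.

```python
import hashlib

def _shifts(key: bytes, n: int) -> list:
    """Derive n per-position digit shifts from the key (toy PRF, illustration only)."""
    digest = b""
    i = 0
    while len(digest) < n:
        digest += hashlib.sha256(key + i.to_bytes(4, "big")).digest()
        i += 1
    return [b % 10 for b in digest[:n]]

def protect(key: bytes, digits: str) -> str:
    """Shift each digit by a key-derived amount, mod 10: output is still all digits."""
    return "".join(str((int(d) + s) % 10)
                   for d, s in zip(digits, _shifts(key, len(digits))))

def unprotect(key: bytes, digits: str) -> str:
    """Reverse the shifts to recover the original value."""
    return "".join(str((int(d) - s) % 10)
                   for d, s in zip(digits, _shifts(key, len(digits))))

key = b"demo-key"
pan = "4111111111111111"
token = protect(key, pan)

assert len(token) == len(pan) and token.isdigit()  # same format: length and digits
assert unprotect(key, token) == pan                # reversible with the key
```

Because the protected value fits anywhere the original did, downstream applications that only pass the value along (or match on it) need no changes; only code that must see the real value performs the unprotect step.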
Which finally gets back to my original point: adding encryption at a lower level because it’s easy can sometimes be worse than doing nothing at all, even if it is protecting against some threats. The secondary effects that can ultimately compromise security include auditors who sign off simply “because the data is encrypted” and management (and staff!) who believe that the problem is solved, and thus refuse to consider more effective application-level approaches to complete the job. In other words, “Yay, we took care of that! Now it’s back to Facebook”.
Such companies wind up complacent, secure in the knowledge that they have encrypted their data, without realizing how little protection that encryption affords. According to Gartner, over three quarters of attacks are at the application layer. Who can afford only 25% protection?
Lower-level protection methods are fine as limited-term solutions: they do add some security. Just don’t think they provide as much risk mitigation as an application-level approach! The responsible thing, as a security professional, is to put those in place now if you need a quick solution, but to plan to continue the project and upgrade to a more secure, application-level solution.
About the Author
Phil Smith III is Senior Architect & Product Manager, Mainframe & Enterprise, at Micro Focus, formerly HPE Software. He is the author of the popular blog series, Cryptography for Mere Mortals.
*** This is a Security Bloggers Network syndicated blog from HPE Security – Data Security authored by Phil Smith III. Read the original post at: http://feedproxy.google.com/~r/voltage/VDQg/~3/2vWRmuuwNvs/