More Musings on Reverse Security Theater and “Security Signalling”

“Security theater” (a term widely attributed to Bruce Schneier) “refers to security measures that make people feel more secure without doing anything to actually improve their security.” In essence, it is fake, “feel-good” security: measures and controls that make one feel secure without delivering any measurable risk reduction.

Lately I’ve been thinking a bit about the opposite phenomenon, “reverse security theater”: a situation where a system or an application is measurably and effectively secure, but makes people feel insecure, or at least fails to make them feel secure. Or, similarly, a situation where, given a choice between two options — one more secure and one less secure — the organization picks the less secure option, “for security reasons.” (As a side note, there are many, many situations where we have absolutely no idea which option is more secure, but there are some where we definitely do; e.g., “which is more secure, SSH or Telnet?” won’t cause a big uproar, right?)

Security theater ultimately points at the irrationality of many security decisions, both in the realm of physical security (the source of Mr. Schneier’s favorite examples) and in our realm of cyber. However, the irrationality goes both ways — hence “reverse security theater.” People (yes, security leaders are people too) may feel secure under inadequate security, and they may feel insecure under effective and robust security.

As somebody wisely pointed out in the related Twitter thread, security theater is a case of “security signalling.” Let’s use this post to try to separate security from security signalling, because it turns out both may have value.

Now, the initial inspiration for this came from looking at the domain of cloud security, but since then I have noticed the phenomenon in more places. Cloud still inspires plenty of fear in security professionals, to be sure (though less of it nowadays, as I hear).

In fact, Jay Heiser’s classic “Clouds Are Secure: Are You Using Them Securely?” starts with this passage: “CIOs need to ensure their security teams are not holding back cloud initiatives with unsubstantiated cloud security worries.” But guess what? If I recall correctly, this line hails from 2011 or so, and these “unsubstantiated worries” are still around, nearly 10 years later.

Lately, I’ve been looking a bit into data security, encryption and, particularly, key management in the cloud. I do see cases where a cloud-based (and, frankly, more secure by any reasonable standard) approach is avoided in favor of a legacy approach with known security issues.

For example, are your encryption keys more secure in an HSM device that you run on your own, or in a well-designed software system run by people who are perhaps the best on the planet in this area? There are definitely people who think it is “more secure” if they keep the encryption keys themselves rather than letting the cloud provider do it (note that this is different from all the geopolitical and/or legal reasons for key possession — I am talking only about the pure technical security merits).
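To make the comparison a bit more concrete, here is a minimal sketch of what “letting the cloud provider hold the keys” looks like in practice (assuming AWS KMS and the boto3 library purely for illustration; the key alias and region are hypothetical, and this is not an endorsement of any particular provider):

```python
# A minimal sketch of the "cloud provider holds the master key" model.
# Assumes AWS KMS via boto3; the key alias and region are hypothetical.
import boto3

kms = boto3.client("kms", region_name="us-east-1")

# Encrypt a small secret. The master key material never leaves the
# provider's managed HSMs; our code only ever sees the ciphertext.
ciphertext = kms.encrypt(
    KeyId="alias/app-secrets",   # hypothetical key alias
    Plaintext=b"database password",
)["CiphertextBlob"]

# Decrypt later. For symmetric keys, KMS identifies the key from the
# ciphertext blob; access is governed by IAM policy and audit-logged.
plaintext = kms.decrypt(CiphertextBlob=ciphertext)["Plaintext"]
assert plaintext == b"database password"
```

Contrast that with running your own HSM, where you also own the patching, the firmware, the physical access procedures and the key ceremonies, and ask yourself honestly which side is more likely to get all of that right.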

BTW, I suspect that in the cloud this is explained by the old classic — lack of control. The same lack-of-control argument is often used to explain why so many people have a fear of flying and so few have a “fear of driving” (is that even a thing?), even though driving kills orders of magnitude more people worldwide. To push the analogy further, there is no “fear of IT,” but there is definitely “fear of cloud.” Frankly, most of us have seen examples of really “scary-insecure” IT practices; it would be more logical to be afraid of those IT practices than of elite cloud provider practices.

Another fun Twitter thread deals with the subject of “security in hardware.” For some people, “security in hardware” (be it a TPM chip, hardware-based memory protection, an HSM or even some hardware appliance) signals good security, while for others (many others!) it signals hard-to-fix bugs, update challenges and inflexible choices. This particular example shows that the same message may sound “very secure” to some and “likely insecure” to others. This creates a truly befuddling mix of real and imagined security!

Or look at mobile OSs: based on most data I’ve seen, modern mobile OSs are remarkably secure — even when users do (some) risky things. However, I’ve met enough people worried about mobile threat scenarios, some of which look like Bruce’s classic “movie plot” threats. What’s the story here? Unlike the cloud case, this is not about the lack of control…

Similarly, some security practices like “frequent password changes” have been historically (and, perhaps, not entirely logically) associated with good security. Today, I feel that frequent password changes are security theater, while well-designed MFA is proven effective by data. Still, there is enough media noise about “weaknesses in 2FA” that I won’t be surprised if some perceive a frequently changed password to be more secure than, say, SMS-based 2FA coupled with a good password set once.

Finally, compliance sometimes contributes to this phenomenon. It does so by implicitly (and sometimes explicitly) promoting the less secure choices as “time proven” or favored by auditors and, by implication, more secure. For sure, all of us have met security professionals who feel more secure when there are more firewalls in the environment. Firewalls — when deployed properly — are likely to contribute to risk reduction! However, they clearly contribute a lot to security signalling as well. A badly configured firewall, meanwhile, is pure security theater.

Got more “feels secure / is insecure” or “feels insecure / is secure” examples to share? Hit the comments!

So, What to Do?

Here are some ideas:

  • Accept that many security and risk decisions are emotional, and based on trust and fear.
  • Think of the threat models whenever you deploy security controls; however, also test how the measures will be perceived.
  • Focus on both measurable security and “security signalling” (where needed); otherwise, the risk is that people will choose less secure options while thinking they are choosing the more secure ones, driven by an emotional need to feel more secure.
  • Finally, understand that “security is an emerging property of a system” (as one book puts it), not a feature or a control. This last point is driving a lot of my thinking lately.


