
The CISO’s White Whale: Measuring the Effectiveness of Security Awareness Training

Boats attacking whales | Source: New York Public Library Digital Collections

I have a hypothesis about end-user security awareness training. Despite heavy investment, most, if not all, CISOs wonder if it does anything at all to reduce risk.

There, I said it. Do you disagree and want to prove me wrong? Good!

How can you prove me wrong? Security awareness effectiveness metrics, of course. 

Many security awareness metrics don’t tell us whether training is working. They report something related, like how many people attended training, pass/fail rates on post-training quizzes, or sentiment surveys. I presume most CISOs want their security awareness training to reduce risk. How would you know if it does?

Therein lies the CISO’s white whale. CISOs don’t need (or want) metrics that prove the program exists or count the number of employees that completed training. CISOs need metrics that show employee behavior is noticeably influenced and measurably changed, in proportion to the level of investment.

How do we do it? A few measurement fundamentals, a pass through the Clairvoyant Test, and some creative thinking.

Metrics that don’t work

Here’s what doesn’t work: metrics that simply report the existence of a security awareness training program. They help tick the compliance checkbox, but they don’t tell us whether security awareness training measurably reduces risk. The table below shows the single most common metric I’ve seen across security programs.

| Metric | Status | Comments |
| --- | --- | --- |
| Security Awareness Training Effectiveness | Green | 98% of employees completed mandatory training in Q1 |

First, the entire metric is a semi-attached figure: a rhetorical device in which the author asserts a claim that cannot be proven, so “proof” is given for something completely different. Here, the proof (the percentage of employees that completed training) does not match the claim (security awareness training is effective). Training completion rate doesn’t tell us if or how user behavior is influenced.

Next, let’s ask ourselves: Does this metric pass the Clairvoyant Test (or the Biff Test, if you’re a Back to the Future fan)? Quick refresher: a metric is well-written if Biff Tannen, with his limited judgment and problem-solving skills, can fetch the answer with Doc’s DeLorean. In other words, a good metric is clear, unambiguous, directly observable, quantifiable, and not open to interpretation.

The answer is no. It does not pass the Biff Test. There are several problems with the metric:

  • Ambiguous: It’s not exactly clear what we mean by “Security Awareness Training” – many companies have different levels of training, at varying cadences, for different job functions.

  • Observable: The metric itself is not directly observable; the measured item is implied, but not explicit. 

  • Quantitative: The measured object, Security Awareness Training Effectiveness, is fuzzy; it isn’t measurable as stated. The reported value, “Green,” is an adjective, not a measurement. It looks pretty but doesn’t tell you anything.

  • Objective: “Effective” is not an objective measurement. Different people will come to different conclusions about whether something is “effective.”
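To make the contrast concrete, here’s a minimal sketch, in Python, of what a metric definition that passes the Biff Test might capture: an explicit observable, a real unit, and a measurement window. The structure and field names are my own illustration, not a standard.

```python
from dataclasses import dataclass

# Illustrative structure (my own field names, not a standard): a Biff-Test-
# friendly metric pins down what is observed, in what unit, over what window.
@dataclass
class MetricDefinition:
    name: str        # what we call the metric
    observable: str  # the directly observable event being counted
    unit: str        # e.g., "percent of tested employees"
    window: str      # the measurement period

# Fails the Biff Test: nothing here is observable, and "Green" is an adjective.
vague = MetricDefinition(
    name="Security Awareness Training Effectiveness",
    observable="unspecified",
    unit="Green",
    window="unspecified",
)

# Passes: Biff could fetch this number without exercising any judgment.
precise = MetricDefinition(
    name="Phishing simulation click rate",
    observable="employees who clicked a simulated malicious link",
    unit="percent of tested employees",
    window="Q1",
)
```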

Well, how could we measure security awareness training effectiveness?

It can be difficult to measure intangibles, like “effective,” “love,” and “high risk.” However, it is possible and done every day in the actuarial, medical, and engineering fields, to name a few. When I get stuck, I remind myself of Doug Hubbard’s clarification chain from How to Measure Anything: Finding the Value of Intangibles in Business. The clarification chain is summed up as three axioms:

  • If it matters at all, it is detectable/observable.

  • If it is detectable, it can be detected as an amount (or a range of possible amounts).

  • If it can be detected as a range of possible amounts, it can be measured.

What observable thing are we trying to measure? What would you see that would tell you security awareness training is working? Keep working the problem: decompose it, whiteboard it, and talk to subject matter experts until you have a list.

Applying that logic to our problem, we realize that “effective awareness training” isn’t one measurable thing. It’s a collection of distinct employee behaviors and activities that change when training is working, behaviors we can observe and, therefore, measure.

For example, if security awareness training is effective, you may observe the following:

  • An increased number of suspicious emails forwarded to security teams

  • A decrease in endpoint malware incidents

  • Fewer clicks on employee phishing simulations

  • Employees challenging tailgaters in locked areas, as observed on security cameras

  • Fewer incidents of business email compromise and other attacks that target end-users

This list can be directly turned into metrics.

| Metric | Measurement | Data |
| --- | --- | --- |
| Suspicious emails reported | Quarter-over-quarter change in suspicious emails forwarded to Security by end-users | 5% increase from the previous quarter |
| Endpoint malware | Quarter-over-quarter change in detected malware infections on endpoints | 13% decrease from the previous quarter |
| Phishing tests | Percentage of employees that click on simulated malicious links during a phishing test | 3% of employees clicked (2% decrease from the last test) |
| Tailgating incidents | Quarter-over-quarter change in reported tailgating incidents | 1% increase from the previous quarter |
| Business email compromise | Quarter-over-quarter change in detected business email compromise incidents | 5% decrease from the previous quarter |
| Awareness training coverage | Percentage of employees that completed mandatory training | 98% completed in Q1 (98% in the previous quarter) |
| Awareness training pass/fail rate | Percentage of employees that passed the final test on the first attempt | 75% passed (60% in the previous quarter) |
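The arithmetic behind those quarter-over-quarter deltas is simple enough to script. Here’s a minimal Python sketch; the quarterly counts below are numbers I invented to match the table, not real data.

```python
# Turn raw quarterly counts into the quarter-over-quarter deltas in the table.
def qoq_change(previous: float, current: float) -> float:
    """Percentage change from the previous quarter to the current one."""
    return (current - previous) / previous * 100

# Invented counts for illustration: (previous quarter, current quarter).
metrics = {
    "Suspicious emails reported": (400, 420),
    "Endpoint malware incidents": (150, 130),
    "Tailgating incidents reported": (100, 101),
    "Business email compromise incidents": (20, 19),
}

for name, (prev, curr) in metrics.items():
    delta = qoq_change(prev, curr)
    direction = "increase" if delta >= 0 else "decrease"
    print(f"{name}: {abs(delta):.0f}% {direction} from the previous quarter")
```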

As you’re reading through the metrics above, take note of two things:

  • Just like there’s no single characteristic that tells you I’m a safe driver, no single metric tells you awareness training is working. It’s a collection of attributes that starts to paint a broad picture of how behavior is influenced over time.

  • Notice the last two; they look very similar to the metrics I said didn’t work at the beginning of the post. They don’t work to measure effectiveness, but they do work to measure coverage of controls. Coverage is a very important metric and one I’ll probably cover in a future post.

Start monitoring each of the items above and track trends and changes. After you have a baseline, decide your thresholds, and – boom – you have your first set of security awareness training KRIs and KPIs. These can be reported on their own, aggregated together, or used in a risk analysis.
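As a rough illustration of that threshold step, here’s a small Python sketch. The threshold values and status labels are arbitrary choices for the example; yours should come from your own baseline and risk appetite.

```python
# Map a baselined metric value to a KRI status using agreed thresholds.
# The thresholds and labels here are arbitrary example values.
def kri_status(value: float, green_max: float, amber_max: float) -> str:
    """Return a status band for a metric where lower values are better."""
    if value <= green_max:
        return "Green"
    if value <= amber_max:
        return "Amber"
    return "Red"

# Example: phishing simulation click rate of 3% against example thresholds.
click_rate = 3.0  # percent of employees who clicked in the latest test
print(kri_status(click_rate, green_max=4.0, amber_max=8.0))  # -> Green
```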

Wrapping Up

Hopefully, this gives you a good starting point to answer the burning question every CISO has: is my security awareness investment working?

Be forewarned: you must keep an open mind. Your metrics may reveal that training doesn’t work at all, and that we need to build better systems rather than cajole our user base into doing our job for us.

Or is that too revolutionary?

