Users Failing Phishing Simulations? That’s ok

Phishing simulations come with a range of emotions for the users who interact with them. Some will simply ignore them, others may fail by clicking on a link or attachment, and the well-trained may even report them.

Even if there is a negative outcome, training leads and organizations should not worry just yet. Just like in school, these simulations are exactly that: simulations, or quizzes. They are designed to prepare users for the real test, an actual attack or phishing lure from a threat actor.

A single failed quiz typically doesn’t lead to a full flop, but repeated failures certainly can. This week we’re going to look at the logic and benefits behind failed phishing simulations, and why they’re not the end of the world.

Getting to the Root of Repeated Failures

Fail once, shame on them.

Fail twice, shame on you.

A simulated phish is designed to fool the recipient, and depending on who creates it, these can truly be some of the craftiest, most targeted lures out there. However, if your users are repeatedly clicking on the links or attachments within a lure, that is a sign your training program is not effective. When this occurs, it’s important to get to the root of the issue:

  • Is the frequency an issue?
  • Are users not remembering the information?
  • Are your phish just that good? (probably not)

If your training only occurs once or twice a year, there is a greater chance that a user will fail a simulation. Keep in mind that simulations are not a training tool but a testing tool, designed to reinforce information from security awareness training. Infrequent training can also explain why users are not retaining the information.

Then there is also the slight possibility that your training lead created such a diabolical simulation that people clicked on it anyway. Whichever situation applies, there is some good news to take from it, starting with plugging the holes in your training program.

Identifying Gaps in Training

Phishing simulations come in many different forms, and if you are pulling one from a library of existing templates, there is a chance it doesn’t have a specific focus. In other words, a simulation is just that: a simulation. When creating or sending one, it should reinforce information and learnings from the most recent training. However, if the training you have in place focuses only on the big picture, crams a large amount of information into one to four sessions, and was completed months ago, none of that matters.

To a training lead, this should raise a red flag and point to a few adjustments to the program, such as:

  • Increasing training frequency
  • Minimizing scope of each training
  • Minimizing the time required for each training session

Reinforce Seriousness of Training

During most employees’ onboarding, they are required to take some form of security awareness training. Then, each year following, they get a refresher, sometimes with the exact same training materials, to check the box again. In these situations, and really most others, users simply do not take security awareness training seriously.

They see examples of phishing lures with horribly broken English, they roll their eyes at the gamification and terrible misuses of pop culture, and, of course, they try to cram it all in while doing actual work at the same time. In some cases those long training videos just play behind their work, occasionally getting an alt+tab to advance to the next section. Think these users take the training seriously? Of course not.

What happens when they fail a phishing simulation, though? Typically they run through the five stages of grief in about five minutes, ending with a few choice words for the training lead. A bit of shame and embarrassment tends to linger, though, and that is when you strike like a cobra.

That is the moment users will take the training seriously, and exactly when you issue point-of-failure training. This training highlights the information they need to avoid failing a simulation in the future and reminds them not to let their guard down. From here, users should start to improve.

Training Scores are Improving. Now What?

As users improve, so should your tactics. Training leads need to think like threat actors. That means sending phish at times when users may only be on their mobile devices, using information and subject lines that are personal to them, and ultimately mimicking a targeted or spear-phishing attack.

Did they fail after you kicked things up a notch? See the earlier tips, but also understand that pushing users to their limits with security awareness training, adjusting your tactics and distribution vectors, and terrorizing them a bit is all in the spirit of protecting the enterprise.

Training Results Prove Budget Need for Training and Technology

Every organization needs some form of security awareness training, as well as technology to protect the perimeter of the business. If ever there was an easy button for increasing the budget around either element, it is the results of phishing simulations; even after a persuasive argument has been made, those numbers will drive the point home.

Making a Weakness a Strength

As tactics improve and the training program is strengthened, businesses need to focus on the bigger picture. There is no denying that human error is often the easiest thing to blame for security breaches, but that is just another sign of ineffective training.

A strong training program should encourage users not just to ignore what appears to be spam or otherwise suspicious, but to actively report it to the security team as quickly as possible. In doing so, the security team can more rapidly mitigate a threat that a less trained user might otherwise interact with. Empowering well-trained users to act as an extension of the security team should be one of your primary goals, but the other elements mentioned in this article need to be in place before that expectation can be set.

*** This is a Security Bloggers Network syndicated blog from The PhishLabs Blog authored by Elliot Volkman. Read the original post at: