Successfully Influencing Employee Security Behavior

With phishing scams commonplace and an expanding threat surface making security breaches more likely, security teams are debating how to deal with, and in some cases punish, employees who fail security tests or cybersecurity quizzes, or who fall victim to scams such as business email compromise.

A new report from Forrester Research indicates organizations should tread carefully between engagement, empathy and punishment, because punishment tends to reinforce employees’ negative perceptions and resentment of the security team. And that can lead to disastrous results.

No Shame in Training

The report noted that shaming and punishment push users toward disengagement and warned that disengaged employees are more likely to ignore security policies and put the company at risk.

While listening, coaching and changing processes are all well and good, at some point management may have to face reality and discipline anyone who has been maliciously flouting the rules, according to the report.

The report suggested creating an “ethical discipline checklist,” which includes a “socialized, communicated disciplinary policy” that supports evaluating the potential impact of any disciplinary action on an employee’s mental health.

The checklist also warns against publicly sharing the names of offenders, recommending that such data be used only in internal “lessons learned” exercises.

Tim Wade, technical director of the CTO team at Vectra, an AI cybersecurity company, explained that when responses to security failures are handled improperly, there is a real risk of driving negative but accidental behavior underground and disincentivizing users from coming forward and self-reporting.

“This means it’s critical to separate intentionally malicious behaviors from unfortunate but unintentional ones,” he said. “The former may require negative consequences while the latter almost certainly is the byproduct of insufficient support, education and awareness.”

Wade said positive feedback goes a long way toward soliciting buy-in, noting that when users feel their actions positively contribute to an organization they’re committed to, they’ll naturally bias toward self-selecting good behaviors.

“Stop and ask yourself: ‘For this plan to work, does it require things going right? Does it require users always knowing what to do? Does it require things that are secret to stay secret?’” he said. “An environment that’s tolerant of human fallibility makes explicit allowances for things that don’t go according to plan, has fail-safes to detect when those failures occur and even incentivizes self-reporting.”

Bud Broomhead, CEO at Viakoo, a provider of automated IoT cyber hygiene, agreed that the people being trained need to feel engaged and responsible for the security of the systems, devices and data they’re being trained on.

“In other words, security training should involve systems and devices that are commonly used by the person being trained,” he said. “For example, facilities teams that are responsible for cameras and HVAC systems will need training to know how firmware, certificates and passwords are key to securing these critical IoT devices.”

Tying Consequences to the Attack Surface

Broomhead said consequences should be weighed according to how they shrink or expand overall risk to the company (i.e., how the attack surface changes).

An example of a common but highly problematic situation is adopting a “set-it-and-forget-it” approach to IoT cyber hygiene where firmware and passwords are never updated.

“This is potentially much more dangerous to the organization, hence it must carry much greater risk weight, because of the large number of devices involved as compared to a single system or application not being secured,” Broomhead said.
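Broomhead’s point can be sketched as a simple calculation: a lapse’s risk weight scales with the number of affected devices and their exposure, so a fleet-wide IoT hygiene failure outweighs one unsecured application. The function, weights and device counts below are purely illustrative assumptions, not a method from the report.

```python
# Illustrative sketch: weight a security lapse by how much it expands the
# attack surface. All names, weights and counts here are hypothetical.

def risk_weight(device_count: int, severity: float, internet_exposed: bool) -> float:
    """Score a lapse: more affected devices and more exposure mean more weight."""
    exposure_factor = 2.0 if internet_exposed else 1.0
    return device_count * severity * exposure_factor

# A single unpatched internal application...
single_app = risk_weight(device_count=1, severity=0.8, internet_exposed=False)

# ...versus "set-it-and-forget-it" firmware across 500 internet-facing cameras.
iot_fleet = risk_weight(device_count=500, severity=0.5, internet_exposed=True)

# The fleet-wide lapse carries far greater risk weight.
assert iot_fleet > single_app
```

However consequences are scored in practice, the takeaway is the same: the multiplier of device count makes unmanaged IoT hygiene dominate the risk picture.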

Sounil Yu, CISO at JupiterOne, a provider of cyber asset management and governance solutions, pointed out that employees often break security policies and controls, not because they are trying to be malicious but because they are simply trying to get their job done efficiently.

“We want our employees to be clever and creative, so it’s no surprise that employees find ways to circumvent security controls,” Yu said. “What is important is that employees share those circumvention methods with the security team, not so that the security team blocks those methods outright, but so that the security team can work to find or build safer, paved paths that enable employees to be even more productive.”

To build trust across the company so that employees feel willing and safe to disclose how they circumvented a security control, the security team needs to keep security simple, open, collaborative, enabling and rewarding. That means embracing transparency over obscurity, practicality over process and usability over complexity.

“Security training that fits today’s mode of consumption is more engaging. At the present time, that mode is short video clips that draw you into a story that teaches you valuable security principles along the way,” he said. “In addition, security training needs to be appropriate to the skill level of the individual to whom the training is being delivered.”

He noted that most security awareness training assumes everyone is operating at the same skill level.

“This wouldn’t be acceptable for most other disciplines, but this seems to be the norm for security training,” he said.

Nathan Eddy

Nathan Eddy is a Berlin-based filmmaker and freelance journalist specializing in enterprise IT and security issues, health care IT and architecture.
