There’s an old joke that goes something like this: “To err is human, but to really screw things up you’ll need a computer.”
Of course it’s funny, but as we all know computers just do what they’re told (or programmed) to do. They’ll do it to the letter, time and time again, without thinking.
And if someone hasn’t had the foresight to predict every situation that a computer program may encounter (unexpected end of file, divide by zero, too much data to fit into the space allotted for it) then things might go wrong.
In short, it’s probably fairer to say:
“To err is human, but to really screw up you’ll need a human to program a computer.”
The point is that even the most carefully thought-through systems and processes might contain bugs and unexpected wrinkles which only come to light when something disastrous happens.
Earlier this month something bad happened in Hawaii. A mistake by a human operator saw a computer system send a terrifying message to residents of Hawaii, warning that a missile was about to strike:
“Ballistic missile threat inbound to Hawaii. Seek immediate shelter. This is not a drill.”
Thankfully, the message turned out to be a false alarm. But it took a full 38 minutes for the follow-up “Don’t panic” message to be sent to citizens who had been scurrying to find shelter or reach loved ones.
There has been much said about how it was possible for an incorrect missile warning message to be sent, but I’m actually more interested in why it took so long to communicate the truth to a petrified public.
One issue seems to have been that although there were well-rehearsed processes in place for sending out missile warnings, there was no equally smooth-running system for issuing corrections rapidly.
Furthermore, the office of Hawaii’s governor David Ige knew that it was a false alarm just two minutes after the alert had been sent state-wide to mobile phones. And yet it took Ige 17 minutes to send a tweet saying there was no missile threat.
There is NO missile threat. https://t.co/qR2MlYAYxL
— Governor David Ige (@GovHawaii) January 13, 2018
The reason? The Governor of Hawaii had a simple explanation. He forgot how to log into Twitter:
“I have to confess that I don’t know my Twitter account log-ons and the passwords, so certainly that’s one of the changes that I’ve made. I’ve been putting that on my phone so that we can access the social media directly.”
Clearly he wasn’t following the example set by some of the staff at Hawaii’s missile alert agency, who were keeping their passwords on Post-it notes.
On reflection it’s clear that human error, compounded by poor user interface design, caused the bogus missile alert to be sent out. Such things shouldn’t happen, but – unfortunately – sometimes they do happen.
And when they do happen you need to ensure that you have a communication strategy in place.
You may not know precisely what form a cybersecurity incident inside your organisation may take, but you should work through the possible scenarios with your internal teams now – and make a communication strategy a key part of your response plan.
That means not only determining who needs to be told about an incident inside your business, but also how it will be communicated internally to staff, your customers, your partners, the press, and regulatory bodies.
Think now about how social media might be able to help you get a message out to your clients and the media, especially if other systems (such as your website) may not be operating effectively.
Don’t leave it until you are in the middle of a rainstorm to remember where you packed away the umbrellas.
You may be tempted to blame the incident itself on technology somehow fouling up, but the truth is that many of those watching you will end up judging you less by what went wrong, and more by how you handled the fallout.
This is a Security Bloggers Network syndicated blog post authored by Graham Cluley. Read the original post at: Business Insights In Virtualization and Cloud Security