Chernobyl and its Cyber Lessons, Part 2: Incident Response

What can be learned from the incident response of the Chernobyl nuclear disaster in 1986?

HBO’s recent “Chernobyl” series, which retold the story of the nuclear accident that threatened much of Europe in 1986, made for compelling viewing. The accident was said to have helped prompt the fall of the Eastern Bloc and bring about a fundamental shift in global politics.

On April 26, 1986, reactor No. 4 exploded, throwing radioactive material into the night sky. We may never know how many people suffered as a result of this accident. The official death toll was 31. Or 54. Or several thousand. Or 93,000.

I looked at the contributing factors in my previous blog; however, I thought it would be interesting to look at the incident response that followed the explosion and what we can learn from this in a cyber sense. Please bear with me, there is a lot to cover!

We should, of course, note that the Soviet Union was a very closed society. Mistakes were considered impossible in a Communist state and were often covered up; those who were involved could end up in Siberia (or worse). Therefore, expectations of transparency would be misplaced. However, what can we learn about incident response and management from Chernobyl?

What Happened?

It took three minutes from the first explosion for the fire alarm to be raised. There was confusion in the control room as to the scale of the problem, with the chief in charge insisting that the reactor was intact and dismissing the debris as parts of the emergency tank.

Staff were sent to see what condition the reactor was in, and despite their reporting back that the reactor had been destroyed, their findings were dismissed.

Telephone lines were down. The firefighters knew little about radiation and were ill-prepared for what they found, arriving without protection. It was reported that they picked up the graphite scattered around the site.

Thirty minutes after the explosions, the reactor was still thought to be intact.

A crisis meeting was convened and police assistance was sought to seal off the town. Thousands of police arrived without protective clothing, dosimeters or information on how to handle radiation or radioactive material.

Three hours after the initial explosion, it was still reported that the reactor was intact. Further staff were sent to survey the reactor. They reported that it had been destroyed. This report was also dismissed as being inaccurate.

All fires were extinguished, except the fire within the remains of the reactor, at around 6:35 a.m.

At 8 a.m., the shift changed and 286 men arrived to continue building the 5th and 6th reactors.

Some 18 hours after the accident, a government committee was established.

On Sunday, April 27, helicopters started dropping sand, boron and lead into the stricken reactor.

On Monday, April 28, a nuclear power plant in Sweden detected high levels of radiation as part of a routine check on the soles of employees’ shoes.

Moscow TV announced that there had been an accident at the nuclear plant. “Measures are being taken to eliminate consequences of the accident,” it was reported. “Aid is being given to those affected. A government commission has been set up.”

A nuclear research laboratory announced that a “maximum credible accident” had occurred at Chernobyl, mentioning that a complete meltdown of one of the reactors had occurred and that radioactivity had been released.

On Tuesday, April 29, an American satellite captured images of Chernobyl, showing the roof blown from the reactor, just as the Soviets released photos of the disaster, which were doctored to remove the smoke.

I could go on … I find this story both fascinating and horrific in equal measure.

What Does This Tell Us?

When an incident happens—particularly a big one that impacts critical services—confusion reigns. Other events often come into play, such as senior members of staff being out of contact. It’s an unpleasant and stressful place to be.

Information comes at you in waves—some good, some bad, some reliable, some not—and sometimes you don’t get information at all.

The incident ripples throughout the organization and rumors start, which often are passed off as fact.

The response can be inadequate when staff members undertake roles that they are neither prepared for nor trained in.

Perhaps worst of all, our customers tell us the extent of the problem, further damaging reputation.

What Can We Learn?

Have an incident response and management plan that includes communication. You should know who your stakeholders are and what their primary interest is, and then use this to formulate that communication plan. Test the plan in as realistic a way as possible. A leisurely stroll through it over coffee and cakes is unlikely to stress the component parts effectively, including staff.
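A communication plan of this kind can start as something as simple as a stakeholder matrix: who needs to hear from you, what they care about, and how often. Below is a minimal sketch in Python — the stakeholder names, interests, channels and intervals are illustrative assumptions, not a prescribed template:

```python
# Minimal sketch of a stakeholder communication matrix for an incident
# response plan. All names, interests, channels and intervals here are
# illustrative assumptions -- adapt them to your own organization.
STAKEHOLDERS = {
    "board":     {"interest": "business impact",      "channel": "phone",         "update_every_mins": 60},
    "customers": {"interest": "service availability", "channel": "status page",   "update_every_mins": 30},
    "regulator": {"interest": "data exposure",        "channel": "formal notice", "update_every_mins": 240},
    "staff":     {"interest": "what to say and do",   "channel": "internal chat", "update_every_mins": 30},
}

def due_for_update(stakeholder: str, mins_since_last_update: int) -> bool:
    """Return True if this stakeholder's agreed update interval has elapsed."""
    return mins_since_last_update >= STAKEHOLDERS[stakeholder]["update_every_mins"]
```

Even a table this small is worth exercising during a test run: it makes it obvious when a stakeholder group has been forgotten under pressure.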

Ensure the incident management plan clearly defines roles and responsibilities and use the right people to do the right jobs wherever possible. This helps to reduce the risk of misinformation. Trust your staff and believe the information they give you—if you have recruited effectively and provided training, they will give you the information you need.

You won’t know everything about the incident straight away, but generally, it is best to be open, honest and transparent about the fact you have had an incident. If you don’t, others may well do so. By disclosing it, you control the communication and the narrative. You may not know everything yet, but as the police say, “We have limited information at this time.” You can always provide more information later—social media makes this quick and easy.
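An early holding statement along these lines can be templated in advance, so the first public update does not have to be drafted from scratch under pressure. A hedged sketch — the wording and fields are assumptions, not official guidance:

```python
def holding_statement(service: str, status: str, next_update_mins: int) -> str:
    """Build a short 'limited information at this time' style update.
    The wording is an illustrative template, not official guidance."""
    return (
        f"We are aware of an incident affecting {service}. "
        f"Current status: {status}. We have limited information at this time "
        f"and will provide a further update within {next_update_mins} minutes."
    )
```

Pre-agreeing the template (and who signs it off) means the debate during an incident is only about the facts to insert, not the framing.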

If you are putting forward a member of senior management to speak to the media, ensure they are effectively briefed about cyber and at least have a grasp of the jargon associated with the incident. And if this is not possible, support that executive by having the CISO alongside, who can add context and handle questions of a technical nature. This does not signify weakness on behalf of the executive.

Once the incident is over, learn well from it, ensuring that what you learn is embedded across all areas for incident response where a similar event could occur—and let your stakeholders know that you’ve learned.

That incidents happen is a fact of cyber life. However, if you prepare properly, you can at least manage reputational damage.

Simon Lacey


Simon Lacey

Simon Lacey, Principal Consultant at CRMG, is a senior information security and governance specialist with over 20 years’ experience in both the public and private sector. Simon has worked within electrical engineering, the NHS, private healthcare and at the Bank of England, where he was the policy and standards manager and led ISO27001 certification. Simon's considerable experience in risk management, education and awareness, strategy development and consulting to senior management brings a holistic view of cybersecurity and how to create a resilient organization.
