
Is Security Job #1? by Richard Stiennon

We can learn a lot by applying the lessons of the auto industry in the 1980s to today's cybersecurity practices.

I recently caught up with an old friend of mine, Brian Contos, on the Cybersecurity Effectiveness Podcast, produced by Verodin. Check out the episode on Spotify, iTunes, or Stitcher.

I began my career fresh out of aerospace engineering school in the Detroit automotive scene. The mid '80s was a challenging period for the Big Three plus AMC (remember the Gremlin?). Japanese imports were winning the quality/value war against the giant gas-guzzling sedans coming out of Detroit. Toyota and Honda had taken the teachings of Deming to heart and revolutionized manufacturing processes forever.

W. Edwards Deming was a statistics professor at NYU when he began championing statistical process control (SPC), a science based on measurement. He could not get traction for his ideas in the US, but Japanese manufacturers welcomed them and, more importantly, applied them religiously. Detroit did not wake up until Japanese autos made a deep impact on its sales.

But react they did, and they soon started to take quality seriously. SPC is simple in concept: measure manufacturing variances from design. In other words, measure a large number of parts coming off the line and track those measurements. Set bands of acceptance on each measurement, namely the tolerances specified on the blueprints (yes, these were still the days before CAD/CAM). The sampling of components was the statistical part, although often 100% sampling was required at the beginning.

The next step was to systematically tighten up those tolerance bands—the “process” part of SPC. I worked on the suspension of the Caprice Classic, Chevy’s full-size sedan. In order to align the wheels at the end of the production line, the assembly workers could add up to 3/4” of shims. Today such tolerances are measured in millimeters.
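For readers who prefer code to prose, here is a minimal sketch of that loop in Python. The measurements are simulated and the target and tolerance values are illustrative, not real shop-floor specs: sample parts coming off the line, compute classic 3-sigma control limits, flag parts outside the design tolerance, and then tighten the band.

```python
import random
import statistics

# Hypothetical example: wheel-alignment shim thickness, in inches.
# Design target and tolerance band are illustrative, not real specs.
DESIGN_TARGET = 0.000
TOLERANCE = 0.750  # the "up to 3/4 inch of shims" band

def sample_measurements(n: int) -> list[float]:
    """Simulate n parts coming off the line (stand-in for real gauge data)."""
    return [random.gauss(mu=0.05, sigma=0.20) for _ in range(n)]

def spc_report(measurements: list[float], tolerance: float) -> None:
    """Track the sample, compute control limits, and flag out-of-band parts."""
    mean = statistics.fmean(measurements)
    stdev = statistics.stdev(measurements)
    ucl, lcl = mean + 3 * stdev, mean - 3 * stdev  # classic 3-sigma control limits
    out_of_spec = [m for m in measurements if abs(m - DESIGN_TARGET) > tolerance]
    print(f"mean={mean:+.3f}  stdev={stdev:.3f}  control limits=({lcl:+.3f}, {ucl:+.3f})")
    print(f"{len(out_of_spec)} of {len(measurements)} parts outside a tolerance of {tolerance} in.")

parts = sample_measurements(500)
spc_report(parts, TOLERANCE)        # today's acceptance band
spc_report(parts, TOLERANCE / 4)    # the "tighten the band" step of SPC
```

Tightening the band (the second call) typically surfaces more out-of-spec parts, which is exactly the feedback loop SPC is meant to drive.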

But there was still an issue: the measurements were being made, but the poor-quality components were still shipping. There was a command issue. The plant manager had his own metric: the number of cars pushed off the line every shift. Nothing could induce him to stop that flow. The Quality Manager reported to the plant manager and was always overridden. The fix was obvious: make quality Job #1 and have the quality department report around the plant manager to a corporate VP of Quality. And it worked.

Which brings us to IT security today, where in most organizations the CISO reports to the CIO or CTO. Security is secondary to keeping the email servers running. Perhaps that reporting structure could be changed to fix some of the problems that lead to so many major breaches.

But the real lesson is in measuring and responding to security deficiencies. How can we do that? There are plenty of metrics that are often used to measure an IT security department: the number of unpatched systems, the number of blocked malware samples, and, of course, spending. But those measurements do not lend themselves to a process that could be used to improve security effectiveness.

Like tolerances on a blueprint, we do have something that is specified: security policies. They spell out who can gain access, what protocols are allowed, which websites can be visited, and so on. We have enforcement points like firewalls, web gateways, and desktop controls. We just need to measure how effectively those policies are enforced and get on a path of continuous improvement. Eventually, the time saved recovering from attacks can be applied to adding more controls (as long as those, too, are measured).
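To make that concrete, here is a minimal sketch of the measurement step, assuming a hypothetical set of test results from whatever tooling exercises those enforcement points. The control names, policy rules, and the 95% acceptance band are all made up for illustration.

```python
from dataclasses import dataclass

# Hypothetical test results: each record says whether an enforcement point
# behaved as its policy specifies when exercised with a test case.
@dataclass
class PolicyTest:
    control: str        # e.g. "firewall", "web-gateway", "desktop"
    policy_rule: str    # the rule being exercised
    enforced: bool      # did the control do what the policy says?

def effectiveness_by_control(results: list[PolicyTest]) -> dict[str, float]:
    """Fraction of policy tests each control enforced correctly."""
    passed: dict[str, int] = {}
    seen: dict[str, int] = {}
    for r in results:
        seen[r.control] = seen.get(r.control, 0) + 1
        passed[r.control] = passed.get(r.control, 0) + int(r.enforced)
    return {c: passed[c] / seen[c] for c in seen}

# Illustrative run with made-up results; a real program would feed in
# output from whatever validation tooling exercises the controls.
results = [
    PolicyTest("firewall", "block inbound telnet", True),
    PolicyTest("firewall", "block outbound SMB", False),
    PolicyTest("web-gateway", "block known-bad domains", True),
    PolicyTest("desktop", "deny USB mass storage", True),
]

TARGET = 0.95  # acceptance band, analogous to a blueprint tolerance
for control, rate in effectiveness_by_control(results).items():
    status = "OK" if rate >= TARGET else "NEEDS WORK"
    print(f"{control:12s} {rate:5.0%}  {status}")
```

Run regularly, a report like this gives you the same thing the quality department got from SPC: a trend line you can tighten over time instead of a one-off audit.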

Applying the lessons of process control to security is a path forward. Let’s shift gears. Instead of practicing the art of security, we should get it down to a science.
