
Cybersecurity Lessons from the Pandemic: Metrics and Decision-Making

As I discussed previously, for example in my May 18, 2020 BlogInfoSec column, some of the more challenging characteristics of data, such as those relating to value and uncertainty, are generally not given adequate consideration. This is because such data can be much more costly and difficult to collect and analyze, and consequently they are often ignored. There are also issues with the accuracy and variability of data, arising from factors such as sample sizes that are too small and inaccuracies in testing and monitoring methods and tools.

While reading a particularly disturbing article on brain damage from the coronavirus,[i] which was based on examining 43 patients, I was struck by a comment from Dr. David Strain of the University of Exeter Medical School, which is as follows:

“The main limitation is that we do not know what the denominator [is] so we don’t know how frequently these complications arise …”

This means that researchers really don’t know how significant raw numbers are if they don’t know the size of the population and the prevalence of a particular condition throughout the entire population or a representative sample thereof. Is this a major issue or one that affects only a few? That is important to know if the results become the basis for a major effort to track down, isolate and possibly treat such cases and their contacts.

A similar problem arises with understanding how the number of positive coronavirus cases depends on the number of tests administered (as well as on the method of selection, the accuracy of the tests, etc.). It is generally assumed that the metric to watch is the percentage of positive cases, also known as the “positivity ratio,” rather than the raw numbers of tests and of those shown to be positive for the virus. But that metric has limitations, too. Currently, the important decision derived from test results is to use the information on positive cases to track and isolate those who have been in contact with the positives—a virtually impossible task when the numbers get to be very large, as has happened in many epicenters. Ideally the denominator would be everyone, and all positives and their contacts would be followed up, tested and isolated. Some countries have achieved this, although even then resurgence can follow due to infected visitors, for example. Still, it remains a question as to what an acceptable target positivity rate should be. It seems that the accepted upper limit is whatever might lead to the overwhelming of the healthcare system—which is an exploitation of front-line healthcare workers and others who risk their own health and lives to take care of others.[ii] This form of misguided management to an inappropriate metric guarantees the continuation of the pandemic.
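
The positivity ratio is, after all, just a fraction whose numerator is itself distorted by imperfect tests. As a rough sketch (the case counts and the 90%/98% sensitivity/specificity figures below are hypothetical assumptions, not data from any study), the raw ratio can be corrected for test accuracy with the standard Rogan–Gladen estimator:

```python
def positivity_rate(positives, tests):
    """Raw positivity ratio: positives divided by total tests."""
    return positives / tests

def rogan_gladen(apparent_rate, sensitivity, specificity):
    """Adjust an apparent positivity rate for imperfect test accuracy
    (Rogan-Gladen estimator), clamped to the valid range [0, 1]."""
    est = (apparent_rate + specificity - 1) / (sensitivity + specificity - 1)
    return min(max(est, 0.0), 1.0)

# Hypothetical figures: 1,200 positives out of 20,000 tests,
# using an assumed 90%-sensitive, 98%-specific test.
apparent = positivity_rate(1_200, 20_000)
adjusted = rogan_gladen(apparent, 0.90, 0.98)
print(f"apparent {apparent:.3f}, adjusted {adjusted:.3f}")
# prints: apparent 0.060, adjusted 0.045
```

Even this toy calculation shows how much the headline number depends on assumptions buried in both the denominator and the test itself.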

The metric that should be targeted is zero cases, which would eliminate the virus altogether. Some countries appear to have achieved that goal, at least for the time being, as I observed from global statistics as of July 11, 2020.[iii]

There is a similar tendency at work in cyberspace, where information security management and workers are put in untenable positions, exploited and then blamed when a cyberattack succeeds. Rather than invest in the necessary security measures and tools, management is willing to skimp on information security and let the infosec staff take the brunt of major attacks, even though the staff are constrained by underfunding and often lack adequate executive and user support. Some researchers have suggested a transformation of security behavior to address the shortcomings of this all-too-common situation.[iv]

In 2004, I was on a panel at a security conference, and panel members were asked by the moderator what we thought was the role of CISOs (Chief Information Security Officers). I replied that their role was to take the blame when things go wrong, much to the consternation of the other panel members, who included CISOs from major financial and technology companies. Many cases of CISOs taking the hit have since been reported in the press, even in cases where they and their security staff had requested mitigation tools from senior management and were refused the funds. Indeed, in a case that I observed personally, a CIO (Chief Information Officer) admonished his CISO for presenting so substantial a budget, saying that the CISO was trying to assign liability for future attacks to the CIO, since the CISO had requested mitigation tools that had been rejected. The CIO’s instructions were to delete the offending budget items. A short time later, the same company was the victim of a data breach that ended up costing well over a million dollars, but which could have been avoided had a particular inexpensive request been granted and a security tool put in place. The staff took the blame, and the CIO pleaded innocence.

For both the coronavirus pandemic and cybersecurity risk, participative, supportive and committed executives are key. The culture of both virus victims and computer users comes from leadership, who need to be sensitive to the needs and capacities of front-line workers and to set policies and directives that manage to zero cases rather than to some presumed capacity to respond.

Cybersecurity metrics on threats, attacks, vulnerabilities, and incidents are often based on relatively small and somewhat biased data samples, yet we are forced to make decisions about mitigation regardless, because that is the best information we have. Those decisions, as with the pandemic, may default to some predetermined level of acceptability. For example, as long as victims and insurance carriers are willing and able to afford the costs of ransomware, organizations will manage to that level of pain and have little incentive to address the basic underlying problems. As I related in my BlogInfoSec column of September 23, 2019, there is a suspicion that some ransom payments support terrorist groups, which is the flip side of the pandemic situation: neither endangering healthcare workers nor supporting terrorists should be acceptable at any level.

Some metrics are based on simple measures, such as a particular number, but often metrics are ratios where the denominator may be too narrow for accurate assessments and reasonable decision-making. Furthermore, the accuracy of the numerator and the selection of respondents are often in question. Nevertheless, it is usually better to make decisions based on some metrics than on none at all, although that may be debatable in some cases, especially when the metrics are known to be deficient even to the extent that a different—even opposite—decision might have been made if the full story had been known.
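
One way to make the weakness of a small-sample metric visible is to report an interval rather than a point value. The sketch below (sample sizes are illustrative assumptions, not drawn from any real survey) uses the standard Wilson score interval for a proportion, which widens honestly when the denominator is small:

```python
from math import sqrt

def wilson_interval(successes, n, z=1.96):
    """Approximate 95% Wilson score confidence interval for a proportion.
    The interval is wide when n is small, flagging an unreliable metric."""
    p = successes / n
    denom = 1 + z * z / n
    center = (p + z * z / (2 * n)) / denom
    half = (z / denom) * sqrt(p * (1 - p) / n + z * z / (4 * n * n))
    return max(0.0, center - half), min(1.0, center + half)

# The same 10% observed rate from two hypothetical sample sizes:
print(wilson_interval(4, 40))      # small sample: interval spans roughly 4%-23%
print(wilson_interval(400, 4000))  # large sample: interval stays tight near 10%
```

The point estimate is identical in both cases; only the interval reveals that the first metric is too shaky to base a major decision on.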

The pandemic has shown how weak data collection and metrics creation can be, and how dangerous it is to base decisions on inadequate metrics or on a failure to understand the import and implications of known metrics. Data underlying metrics should be provable, accurate and indisputable, and metrics derived from the data need to be actionable—otherwise what use are they? But metrics can also be misleading and later shown to be deficient or defective, as mentioned above. This underlines the importance of determining ahead of time what data should be collected, how they should be validated, how they should be organized into metrics, and how those metrics should be used to manage particular situations.

In cybersecurity, we have an abundance of cases where we don’t know enough from the metrics to come up with effective and justifiable risk mitigation tactics and strategies. And we are particularly vulnerable when it comes to anticipating how particular circumstances will play out and how to prepare for those eventualities.

In any event, metrics, particularly security metrics, can only take you so far. Beyond that, we are talking models and prediction, and risk assessment and management, which are challenging topics for future columns.


[i] Jessie Yeung and Lauren Mascarenhas, “Coronavirus pandemic could cause wave of brain damage, scientists warn,” CNN Health, July 8, 2020, Available at https://www.cnn.com/2020/07/08/health/coronavirus-brain-damage-study-intl-hnk-scli-scn/index.html

[ii] It was reported, as of July 10, 2020, that more than 1,000 healthcare workers in the U.S. have died from COVID-19 and it is thought that many of them caught the virus on the job.

[iii] H. Pettersson, B. Manley and S. Hernandez, “Tracking coronavirus’ global spread,” CNN Health, July 11, 2020. Available at https://www.cnn.com/interactive/2020/health/coronavirus-maps-and-cases/

[iv] Shari Lawrence Pfleeger, M. Angela Sasse and Adrian Furnham, “From Weakest Link to Security Hero: Transforming Staff Security Behavior,” Journal of Homeland Security and Emergency Management, Vol. 11, No. 4, November 2014. Available at https://discovery.ucl.ac.uk/id/eprint/1460572/2/jhsem-2014-0035.pdf


*** This is a Security Bloggers Network syndicated blog from BlogInfoSec.com authored by C. Warren Axelrod. Read the original post at: https://www.bloginfosec.com/2020/07/20/cybersecurity-lessons-from-the-pandemic-metrics-and-decision-making/?utm_source=rss&utm_medium=rss&utm_campaign=cybersecurity-lessons-from-the-pandemic-metrics-and-decision-making