
Cybersecurity Lessons from the Pandemic – Positive and Negative Feedback

Systems use negative feedback to converge to stability and equilibrium (a positive quest). Positive-feedback systems diverge, leading to instability and sometimes to surging out of control (usually a negative outcome). Negative feedback inhibits; positive feedback amplifies. Each has its role in nature. Body temperature, for example, is kept within range by negative feedback (known as homeostasis), whereas exponential increases in COVID-19 cases result from positive feedback: each infection generates more infections, and a reduction in cases (resulting from lockdowns) may lead to relaxation of controls, such as gathering in crowds, which in turn leads to an increase in cases. Continued over time, this kind of system can oscillate around a trend: cases fall when controls are enforced, rise when controls are relaxed, fall again when controls are reinstituted, and so on. In the pandemic situation, equilibrium is achieved if the number of cases goes down to zero and is maintained at that level. Equilibrium is also possible if oscillations are kept at a very low level and new cases are quickly tested and isolated.
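To make the two loop types concrete, here is a minimal simulation sketch in Python; the update rule and gain values are illustrative assumptions, not a model of any real system.

```python
# Minimal sketch of negative vs. positive feedback. The update rule and
# gain values are illustrative only.

def simulate(gain, setpoint=0.0, start=1.0, steps=8):
    """Each step feeds the deviation from the setpoint back, scaled by gain.

    gain in (-1, 0): negative feedback -- deviations shrink (convergence).
    gain > 0:        positive feedback -- deviations grow (divergence).
    gain in (-2, -1): overcorrection -- damped oscillation around the setpoint.
    """
    value, history = start, [start]
    for _ in range(steps):
        value += gain * (value - setpoint)
        history.append(round(value, 3))
    return history

print("negative feedback (converges):", simulate(gain=-0.5))
print("positive feedback (diverges): ", simulate(gain=0.5))
print("overcorrection (oscillates):  ", simulate(gain=-1.5))
```

The oscillating case mirrors the lockdown-and-relaxation cycle described above: each correction overshoots, so the system swings back and forth around its trend.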

Here are some examples of various forms of feedback. If you are driving a car through, say, a 40 miles-per-hour zone, you can see on the speedometer whether or not you are exceeding the limit. In some vehicles, the speed limit may actually be shown on the car’s display, which can be useful information if, for example, you miss the speed-limit sign or have forgotten which zone you are in. Clearly, the feedback provided by the speedometer allows you to adjust your speed if you are over the limit and wish to avoid a ticket. In any event, drivers do exceed the limit with regularity. It has been found, however, that those large displays blaring out your speed to the world are effective in reducing drivers’ speeds and preventing accidents. Even though those displays are seemingly not used for enforcement, there is always a lingering doubt. Also, there could be a police car right behind you. But, by then, it may be too late! Perhaps it is the knowledge that the highly-visible display exists that is the deterrent, rather than the reading itself. The additional feedback stabilizes the system.

In the case of cybersecurity, blocking risky websites and warning users that they have contravened corporate policy provides negative feedback, whereas silently blacklisting websites, without telling users why access was denied, likely will not be as effective.
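As a sketch of the difference, consider a filter that blocks and explains versus one that blocks silently. The function name, domain list, and message text below are hypothetical, invented purely for illustration.

```python
# Hypothetical sketch: a block-and-warn filter versus a silent blacklist.
# Domain names and message text are invented for illustration.

BLOCKED_DOMAINS = {"risky-example.com", "malware-example.net"}

def filter_request(domain: str) -> tuple[bool, str]:
    """Return (allowed, message). The message is the negative feedback:
    it tells the user why access was denied and that the attempt was
    noticed, which discourages repeats in a way a silent block does not."""
    if domain in BLOCKED_DOMAINS:
        return (False,
                f"Access to {domain} is blocked: it contravenes corporate "
                "acceptable-use policy. This attempt has been logged.")
    return (True, "")

allowed, message = filter_request("risky-example.com")
if not allowed:
    print(message)
```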

As another example, paying ransoms after a ransomware attack, in which criminals encrypt your data and demand payment in order to provide the decryption keys, constitutes positive feedback: the more victims pay up, the more criminals see success and expand their ransomware activities. If victims all refused to pay, ransomware would no longer be profitable and so would be curbed.

And, of course, you have the overall situation where installing security software might (and usually does) allow you to take greater chances, as do seat belts and air bags in a car. I recall a presenter at a conference asking “Why do we have brakes in a car?” The usual answer is “To make it stop.” “No,” she said, “To allow you to go faster.” It is the same with cybersecurity—we take bigger risks if we think that we are protected. This can lead to oscillations about a trend that, in the case of cybersecurity, keeps climbing.

Add to that the fact that humans are really poor at assessing risks. In a September 8, 2020 article by Lia Kvatum in The Washington Post, which is available at https://www.washingtonpost.com/lifestyle/magazine/why-human-brains-are-bad-at-assessing-the-risks-of-pandemics/2020/09/03/7395321c-dd9d-11ea-b205-ff838e15a9a6_story.html, the reporter asks why some people take the risk of getting COVID-19 more or less seriously than others. The same applies to cybersecurity risks. As the article states: “Risk perception is … highly individual.”

The conditions of convergence, divergence and equilibrium may be envisioned as sine waves, where increasing amplitudes of successive peaks and valleys indicate divergence or instability, decreasing amplitudes indicate convergence (not necessarily to zero) or stability, and steady amplitudes indicate equilibrium. But know that these curves only show the outputs of systems—not the root causes of variability. For that, you need to fully understand the interactions and motives of the various players. Some are obvious, such as a desire to acquire money, but others may be more esoteric and not readily determined. For example, it has been asserted that some ransoms are used to fund terrorism (see the BlogInfoSec column of September 23, 2019). Note that there is often some ulterior motive that may not be apparent to victims and which might have changed their responses had they known. The use of deterrence to avoid attacks requires that you know your opponents’ weaknesses.
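In the standard language of growing and damped sinusoids (a textbook form, not a formula from the original post), all three regimes can be captured by a single envelope parameter:

\[
y(t) = A e^{\lambda t} \sin(\omega t)
\]

where \(\lambda > 0\) gives growing amplitude (divergence or instability), \(\lambda < 0\) gives decaying amplitude (convergence or stability), and \(\lambda = 0\) gives constant amplitude (equilibrium).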

Furthermore, as systems become more complex and interrelated, it is not always easy to identify what is causing a system to leave an equilibrium state and become unstable. This is a problem that is only getting worse as more and more systems interoperate. One approach is to introduce negative feedback wherever possible so that the impact of any disruption—whether intentional or accidental—can be reduced to a manageable level. Sometimes we respond to an observed situation in ways that only make things worse. For example, if a system does not respond in a timely manner, we tend to keep hitting the Enter key, and that only adds more messages to the queue. A better response is to advise users that their request is being addressed and is subject to a slight delay. This kind of feedback serves to stabilize an overwhelmed system.
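As an illustration of that stabilizing feedback, here is a minimal sketch of acknowledging a request and retrying with exponential backoff rather than resubmitting blindly; the function names, delay values, and the fake backend are all assumptions for the example.

```python
import random
import time

# Sketch: acknowledge the user's request, then retry with exponential backoff
# plus jitter. Repeated manual resubmission (mashing Enter) adds load to an
# already overwhelmed queue -- positive feedback; backing off is the negative
# feedback that lets the system recover. All names and values are illustrative.

def submit_with_backoff(send, max_attempts=5, base_delay=0.5):
    print("Your request has been received and is being processed; "
          "there may be a slight delay.")  # the feedback that damps re-submission
    for attempt in range(max_attempts):
        if send():
            return True
        # Wait longer after each failure instead of amplifying the load.
        time.sleep(base_delay * (2 ** attempt) + random.uniform(0, 0.1))
    return False

# Example usage with a fake backend that succeeds 30% of the time:
submit_with_backoff(lambda: random.random() < 0.3)
```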

Systems designers, together with information security professionals, need to consider how systems might encourage users to behave in a particular desired manner. Notifications that a user has contravened policy, or that someone is aware of attempts to game the system, can be very helpful when preventative methods are not fully effective. The idea is to stabilize the system and ensure that it does not go out of control. This is particularly important for cyber-physical systems, but applies generally across the board. Nature is looking for convergence to an equilibrium—and so should you.


*** This is a Security Bloggers Network syndicated blog from BlogInfoSec.com authored by C. Warren Axelrod. Read the original post at: https://www.bloginfosec.com/2020/09/14/cybersecurity-lessons-from-the-pandemic-positive-and-negative-feedback/