
Is AI For, or Against, Cybersecurity?

With the rapid proliferation of so-called AI (artificial intelligence) systems (many of which are really just rebranded expert systems), we cybersecurity professionals are confronted with two critical questions: Can AI methods be used to improve the protection of our data, systems, and networks? And can AI systems themselves be secured effectively?

The distinction between AI systems and ML (machine learning) systems can be confusing. Bernard Marr, a contributor to Forbes, provides an excellent explanation of the difference between AI and ML in a December 6, 2016 article with the clear title, “What’s the difference between artificial intelligence and machine learning?” which is available at https://www.forbes.com/sites/bernardmarr/2016/12/06/what-is-the-difference-between-artificial-intelligence-and-machine-learning/#144622fa2742. Marr essentially defines ML as a subset of AI, and one that is currently very popular.

Marr writes that:

“Artificial Intelligence is the broader concept of machines being able to carry out tasks in a way that we would consider ‘smart’”

and

“Machine Learning is a current application of AI based around the idea that we should really just be able to give machines access to data and let them learn for themselves.”

With traditional expert systems, on the other hand, developers must include all possible (or at least all known) scenarios and decisions ahead of time, and algorithms sift through lists of possible answers to come up with the optimal decision. AI systems, by contrast, are not pre-loaded with vast troves of information, but instead are endowed with the capacity to learn from experience and, in some cases, to adapt to entirely new situations and respond in previously unimagined ways. However, an ML approach that provides no expert knowledge or guidance for interpreting various situations is not entirely realistic. After all, when human babies learn, they are greatly influenced by parents, siblings, other relatives, teachers and mentors. Consequently, humans will interpret a specific scenario in many different ways … the difference between facts and beliefs, perhaps! A concern is that ML systems will likely be biased in favor of the beliefs of their designers and developers, who may subconsciously imbue the ML system with their own views.
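
To make the contrast concrete, here is a minimal sketch in Python of the two approaches applied to a hypothetical alert-triage task. The field names, thresholds, training data, and “known bad” values are illustrative assumptions, not anything drawn from Marr’s article or from a real product.

```python
# Toy contrast between an expert-system style check, where every known
# condition is written down in advance, and a simple "learned" check,
# where a decision threshold is derived from labelled examples.
# The alert-triage scenario, field names, and numbers are hypothetical.

def rule_based_is_suspicious(alert):
    # Expert-system style: explicit, pre-enumerated rules.
    if alert["failed_logins"] > 10:
        return True
    if alert["src_country"] in {"XX", "ZZ"}:  # hard-coded "known bad" list (made up)
        return True
    return False  # anything the designers did not anticipate falls through

def learn_threshold(samples):
    # ML style (deliberately trivial): no rules are written down; a threshold
    # is computed from labelled data as the midpoint of the class means.
    benign = [s["failed_logins"] for s, label in samples if label == 0]
    malicious = [s["failed_logins"] for s, label in samples if label == 1]
    return (sum(benign) / len(benign) + sum(malicious) / len(malicious)) / 2

training = [({"failed_logins": 1}, 0), ({"failed_logins": 2}, 0),
            ({"failed_logins": 40}, 1), ({"failed_logins": 65}, 1)]
threshold = learn_threshold(training)  # 27.0 for this made-up data

def learned_is_suspicious(alert):
    return alert["failed_logins"] > threshold  # behavior depends entirely on the data seen

print(rule_based_is_suspicious({"failed_logins": 3, "src_country": "XX"}))  # True: a rule fires
print(learned_is_suspicious({"failed_logins": 3}))                          # False: below learned threshold
```

The point of the sketch is that the first function behaves exactly as its authors specified, no more and no less, while the second function’s behavior is whatever the training data (and whoever selected and labelled it) implies.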

One can imagine an AI spectrum, with expert systems at one end and ML systems at the other. It is likely that these systems will move towards the center over time and that AI systems will contain both expert knowledge and the ability to adapt as they learn what works and what doesn’t. The risk here is the possibility that such hybrid systems will absorb nefarious intentions as they learn.

Expert systems, however complicated they might become, can be fully described; they can therefore be subjected to traditional software assurance and security testing methods and, at least theoretically, all errors can be found and corrected.
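
As a hedged illustration of why a fully specified rule set can, at least in principle, be tested exhaustively, consider the toy access-control policy below: the input space is finite and enumerable, so every case can be checked against the stated security properties. The roles, actions, and rules are hypothetical and chosen only to show the idea.

```python
# Exhaustive testing of a fully specified rule set over a finite input space.
# The roles, actions, and permitted pairs here are hypothetical.
from itertools import product

ROLES = ["guest", "user", "admin"]
ACTIONS = ["read", "write", "delete"]

# Expert-system style policy: every permitted (role, action) pair is written down.
PERMITTED = {("guest", "read"),
             ("user", "read"), ("user", "write"),
             ("admin", "read"), ("admin", "write"), ("admin", "delete")}

def allowed(role, action):
    return (role, action) in PERMITTED

# Check all nine combinations against two stated properties:
# guests can never modify anything, and only admins can delete.
for role, action in product(ROLES, ACTIONS):
    if role == "guest" and action in {"write", "delete"}:
        assert not allowed(role, action), f"guest wrongly allowed to {action}"
    if action == "delete":
        assert allowed(role, action) == (role == "admin"), "delete must be admin-only"

print("all", len(ROLES) * len(ACTIONS), "cases verified")
```

No comparable enumeration exists for a system whose behavior emerges from training data and continues to change as it learns, which is the problem taken up next.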

On the other hand, AI systems are complex, meaning that they can enter unpredicted and sometimes unpredictable states. How can one test such systems? “Aha!” say the cybersecurity researchers, “we just need to apply our AI security scanners and monitors against these AI systems.” OK. But what if the “good guy” and “bad guy” AI systems conspire on their own to cooperate rather than be adversarial, or vice versa? And further, what if they agree among themselves not to disclose to their human overseers what they have chosen to do? Then what do we have? We have intractable AI systems acting in perverse, if not openly hostile, ways. Such AI systems may be smart enough to reflect secure and safe behavior to their overseeing “masters,” but be plotting to do their own things behind their backs (or in full view!).

Why might this happen? Well, for one thing, many developers view information security and privacy as hindrances to progress that create delays in time-to-market (or “time-to-value”). It is not beyond such developers to subordinate security and privacy to performance and cool features. We see it all the time as products are launched, only to be recalled or patched to fix some serious security or safety flaw.

Now, with AI we have a significantly different situation. The developers of AI systems obviously have their own viewpoints, be they social, political, economic or psychological … everyone does. And we have seen a flagrant disregard for personal privacy in the designs of some social networks, virtual assistants, and the like. To be fair, this does not stem as much from evil intentions as it does from constant pressure to come up with software and devices that will capture public attention, sell in huge numbers, and enrich the coffers of the companies providing these features. Of course, the suppliers of these systems do not generally prevent the use of these features by those with evil intentions unless someone in authority calls them to task. And there’s the rub. Devices, such as guns, which can be used for good activities (hunting, self-defense, law enforcement, military action), can also be used for bad activities (hold-ups, murder). One can say that the devices themselves are neutral, but there are also ways to prevent misuse, whether through biometrics, limits on features, restrictions on power, or user education and certification. It would be considered extreme to require background checks on all those wishing to buy computers or smart phones, but not so much for autonomous vehicles, although I haven’t seen much discussion about whether “non-drivers” of autonomous vehicles would be required to pass tests and obtain licenses.

In any event, the above measures, which are feasible even if not generally desirable, become virtually impossible to evaluate and implement in an AI world, with the result that the biases and prejudices of designers and developers will likely be baked into AI systems without the knowledge or understanding of users, lawmakers, regulators, law enforcement, intelligence agencies, or the military. This might be acceptable if everyone had agreed ahead of time on which features are permitted and which should be omitted. But that isn’t happening. One can always hope that AI systems will be benign and supportive, but what if they are malignant and destructive? Then one must hope that the latter can be deactivated before doing too much damage. Good luck with that. It won’t happen unless there is a huge updating of our legal, regulatory and enforcement systems, which is not even on the horizon.

So, what should we do? In the first place, we need to ensure that decision-makers, including lawmakers, regulators, and law enforcement, are educated as to the dangers of unfettered AI systems. Then we must establish global laws, regulations, policies, and standards … and support their enforcement. And finally, we need deterrents that are sufficiently onerous so that the development and use of AI systems remain virtuous rather than vicious. Given the ignorance or nefarious intentions of those who might influence the direction and use of AI, there is clearly much work to be done to try to get ahead of the AI cybersecurity wave. This is work that no one seems eager to take on, which is understandable given the enormity of the task. Understandable, yes, but not acceptable.

*** This is a Security Bloggers Network syndicated blog from BlogInfoSec.com authored by C. Warren Axelrod. Read the original post at: https://www.bloginfosec.com/2018/06/18/is-ai-for-or-against-cybersecurity/