
Cybersecurity and Safety of AI and Robots

Tad Friend’s article in the May 14, 2018 issue of The New Yorker, titled “Superior Intelligence: Do the perils of A.I. exceed its promise?”, describes two schools of thought among researchers concerned about both ANI (artificial narrow intelligence) and AGI (artificial general intelligence) systems:

“Usually, those who fear what’s called “accidental misuse” of A.I., in which the machine does something we didn’t intend, want to regulate the machines, while those who fear “intentional misuse” by hackers or tyrants want to regulate people’s access to the machines.” [Emphasis added]

As described in my book, “Engineering Safe and Secure Software Systems” (Artech House, 2012), the former refers to safety regulations, which are intended to prevent cyber-physical systems from harming humans and/or the environment, and the latter points to security regulations, which are designed to protect systems, networks and data from nefarious outside influences. As recommended in the book, design and development teams must include both cybersecurity experts and safety engineers who together will likely understand the whole picture and create security and safety requirements that ensure that software-intensive systems cannot be damaged or do damage, whether intentional or accidental.


That’s all very well, although still extremely difficult, in the relatively static world of regular software development and implementation—even continuous development and deployment—but it doesn’t meet the challenge of AI systems. And that is for one simple reason: the behavior of future iterations of AI systems, especially AGI systems, is totally unpredictable. Furthermore, it is impossible to anticipate all possible contexts in which these systems might find themselves operating, as is currently being demonstrated by accidents involving autonomous cars.

We are not even good at testing all foreseeable situations in relatively static systems, despite appeals by me and others for functional security testing and functional safety testing. These methods are meant to exercise all possible paths through software systems to ensure that harm cannot be done to or by the systems. Even under continuous development, systems are fixed and known for particular (albeit short) periods of time, so testing may well be complicated (i.e., determinate), but it is not complex (i.e., indeterminate).
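To make the “complicated but determinate” point concrete, here is a minimal sketch, in Python, of what exhaustive functional safety testing can look like for a small, fixed system. The controller, its thresholds, and the safety properties are hypothetical, invented purely for illustration; the point is that when the system does not change and its input space is bounded, every combination can be enumerated and checked.

```python
# Hypothetical illustration: exhaustive functional safety testing of a toy,
# fixed braking controller. Names and thresholds are invented for the example.
from itertools import product

def brake_command(speed_kmh: int, obstacle_distance_m: int) -> str:
    """Toy deterministic controller: choose a braking action from speed and distance."""
    if obstacle_distance_m <= 5:
        return "full_brake"
    if speed_kmh > 50 and obstacle_distance_m <= 20:
        return "partial_brake"
    return "no_brake"

def test_all_inputs() -> None:
    # The input space is small and fixed, so every combination (and hence every
    # path through this little function) can be exercised and checked.
    for speed, distance in product(range(0, 201, 5), range(0, 101, 5)):
        cmd = brake_command(speed, distance)
        # Safety property: never withhold full braking when an obstacle is within 5 m.
        if distance <= 5:
            assert cmd == "full_brake", (speed, distance, cmd)
        # Sanity property: never apply full braking when nothing is within 20 m.
        if distance > 20:
            assert cmd != "full_brake", (speed, distance, cmd)

if __name__ == "__main__":
    test_all_inputs()
    print("Every enumerated input combination satisfies the stated properties.")
```

Tedious, perhaps, but tractable; nothing about the system changes between the time it is tested and the time it runs.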

However, when one must deal with continuously changing and adapting AI systems, where one may be entering unanticipated and unknowable worlds, testing assumes complexities that we haven’t experienced before. And therein lies the problem. Perhaps we can be smart enough to develop adaptive AI testing systems that continually monitor the evolving states and behaviors of true AI systems and check against some acceptable standards before the systems go rogue.

But, knowing how these things have worked in the past, I have little to no confidence that this will happen. After all, it is more difficult and much more expensive and time-consuming to develop AI testing systems than to create the original AI systems to be tested. And who’s going to fund it if they are not forced to? And we know that lawmakers and regulators won’t grasp the issues that these AI systems present … they were already bamboozled when Mark Zuckerberg testified before Congress about Facebook’s role in the 2016 presidential election. You could see the lawmakers’ eyes glaze over when Zuckerberg reassured them that “algorithms” and “artificial intelligence” will be brought to bear on the problem. Who is going to verify that these highfalutin techniques will help solve the problem? Who will ensure that they cannot be used by nefarious actors to do worse damage while reassuring the population that everything is good? Remember HAL 9000’s soothing voice in the movie “2001: A Space Odyssey” (released 50 years ago!) as “he” systematically murdered the astronauts one by one.
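For what it’s worth, here is a minimal sketch of the “adaptive AI testing” idea mentioned above: a run-time monitor that vets each action proposed by a continuously adapting controller against a fixed acceptability envelope before it is applied. Everything here is hypothetical (the class, the policy, and the 60 km/h limit are invented for illustration); it is a thought experiment, not a design.

```python
# Hypothetical illustration: a run-time safety monitor wrapped around a
# continuously adapting controller. All names and limits are invented.
import random

class SafetyMonitor:
    """Vets each proposed action against a fixed acceptability envelope."""

    def __init__(self, max_speed_kmh: float) -> None:
        self.max_speed_kmh = max_speed_kmh
        self.violations = 0

    def vet(self, proposed_speed_kmh: float) -> float:
        # Clamp any proposal that falls outside the acceptable standard,
        # and count it as a violation for later review.
        if proposed_speed_kmh > self.max_speed_kmh:
            self.violations += 1
            return self.max_speed_kmh
        return proposed_speed_kmh

def evolving_policy(step: int) -> float:
    # Stand-in for an adapting AI controller whose behavior drifts over time
    # in ways its original testers never anticipated.
    return 40.0 + step * random.uniform(0.0, 3.0)

def run(steps: int, monitor: SafetyMonitor) -> None:
    for step in range(steps):
        proposed = evolving_policy(step)
        applied = monitor.vet(proposed)
        print(f"step {step:2d}: proposed {proposed:5.1f} km/h, applied {applied:5.1f} km/h")
    print(f"{monitor.violations} of {steps} proposals exceeded the acceptable envelope.")

if __name__ == "__main__":
    run(steps=12, monitor=SafetyMonitor(max_speed_kmh=60.0))
```

Even this toy version exposes the real difficulty: someone has to specify the “acceptable standards” in advance, and for a system that keeps changing and keeps encountering contexts nobody anticipated, that is precisely what we cannot fully do.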

*** This is a Security Bloggers Network syndicated blog from BlogInfoSec.com authored by C. Warren Axelrod. Read the original post at: https://www.bloginfosec.com/2018/06/04/cybersecurity-and-safety-of-ai-and-robots/