John Markoff wrote a column, “Shhh! That Helpful Robot May Pose a Security Risk,” on page B6 of the March 2, 2017 New York Times, in which he warned that the security firm IOActive had uncovered “[s]ignificant security flaws … in an examination of six home and industrial robots,” immediately conjuring up battalions of rogue robots, as in the movie “I, Robot.”
Incidentally, the “I, Robot” movie is based on an Isaac Asimov book of the same name. Asimov set down his Three Laws of Robotics as follows:
- A robot may not injure a human being or, through inaction, allow a human being to come to harm.
- A robot must obey orders given it by human beings, except where such orders would conflict with the First Law.
- A robot must protect its own existence as long as such protection does not conflict with the First or Second Law.
Discussing robots connected to the Web, Markoff quoted Cesar Cerrudo, IOActive’s chief technology officer, as saying “We call it an internet-of-things with arms and legs and wheels … The surface of attack is huge. Each robot has multiple ways it can be compromised.”
So, if we refer to the population of Internet-connected cars as the IoTA, Internet-of-Things Automotive, as I did in a recent column, let’s call the population of robots connected to the Internet the IoTR, Internet-of-Things Robotic. Whereas the latter are “things” with arms, legs, and wheels, the former are “things” with steering, acceleration, braking, and wheels. For our purposes, wheels facilitate moving around, but cars could have legs, wheels, or just air between them and the ground. In any event, autonomous vehicles are becoming robots that look like cars, but they don’t have to. The IoTA can be considered a subset of the IoTR, which in turn is a subset of the IoT. While we focus on self-driving cars derived from the designs of current vehicles, if we define them more broadly as autonomous vehicles of transportation, we open up a whole new world of possibilities. After all, early cars were called “horseless carriages,” so the modern equivalent would be “driverless cars,” but, as we saw, the former term misled many as to how such vehicles would proliferate. The “self-driving cars” nomenclature could be similarly misleading, narrowing our thinking about what could be and will be.
For example, the movie “Star Wars” gave us 20-meter-tall attack vehicles with four legs, known as “All Terrain Armored Transports,” or “AT-AT walkers,” which the Imperial ground forces used as transport and combat vehicles. Star Wars also brought us the Landspeeder, a vehicle without wheels that levitated a foot or two above the ground. So let’s not get overly enamored of wheeled vehicles traveling along traditional roadways, since you may soon be moving around in wheel-less vehicles, unconstrained by traditional roads.
Now, back to the IoTR … Let’s just say that there are things, connected to the Internet, that can propel themselves or be propelled, and those that cannot move of their own accord. They all likely have cybersecurity issues, as Markoff reported for robots and many researchers have reported for the IoT, but things that move add another dimension to safety considerations as they can crash into non-robotic things (other cars, buildings, telegraph poles) and, when many robotic vehicles have been deployed, they might crash into one another.
What it all comes down to is that we continue to roll out devices (intelligent assistants, smart thermostats, robots, road vehicles) without due consideration of cybersecurity and security-related safety threats. All of these devices can be hacked from afar and, if safety-critical, may be taken over and made to behave in dangerous ways. I addressed the fundamental importance of including BOTH security and safety requirements in all such systems in my book “Engineering Safe and Secure Software Systems” (Artech House, 2012), but few appear to have taken to heart the warnings and lessons of the book and of other researchers’ related writings. Until and unless we get cybersecurity professionals and safety engineers working together, the dangers described by Markoff will be pervasive, and dealing with them after the fact will be a nightmare. We must build cybersecurity in from the start in order to avoid the enormous costs of retrofitting it into existing systems … which we may be forced to do anyway. So let’s get a jump on this and come up with standards that require all such systems to be certified at a specified level of security before they can be deployed, not after the fact.
This is a Security Bloggers Network syndicated blog post authored by C. Warren Axelrod. Read the original post at: BlogInfoSec.com