
Self-Driving Cars (We Are The Robots)

Robots will rule us all. I feel that’s already been established by more sci-fi writers than can be credited in one podcast.

You may have recently seen that an Uber self-driving car killed a pedestrian, a headline-grabbing tragedy that re-awakened the humanity in all of us.

Questions like these quickly followed:

What if… insert dystopian human-based car fiasco here, where AI (artificial intelligence) is deciding the fate of human lives with alarming regularity. During my recent discussions I found some common themes, where these infinite ethical conundrums often came back to questions of responsibility. Who do you blame if AI kills one of our humans? AND… assuming the AI is forced into making a decision which means choosing one life over another, how does it make that choice? Heavy stuff, folks.

Doing my best to simplify my position, I’ll start by defining for the curious (hey, that’s why you listen to podcasts, isn’t it!) how self-driving cars work.

BUBBLE HEAD!

If you didn’t already know, self-driving or autonomous cars are a bit like the Terminator. They utilise a programmatic paradigm called AI, or Artificial Intelligence, and its cohort Machine Learning. Hollywood loves to present AI to us as humanoid robots like the Lost in Space robot (simply called Robot), the aforementioned Terminator (probably the closest example of our future) or the more arty Ex Machina (great film), which I’ll come back to at the end. The reality is that our first mainstream taste of real AI is these wacky and occasionally murderous self-driving cars. They have now gained enough media exposure that pub-quiz gurus and the beer analyticals have already been forced to consider the ethics of their decision making.

Let me start by saying that I’m NOT an AI or machine learning expert. I have friends and colleagues who are, and I’ve been bouncing ideas off them over the past few weeks. I do, however, have a solid history in real-time systems, which allows me to understand coding reactions to external input such that a system/computer/machine can interact in what is considered to be “the present”. These cars have a lot of that kind of code going on. Real-time systems take pride in being deterministic, which, simply put, means they always react the same way to the same stimulus in the same amount of time. They are wonderfully predictable and not at all “learning”.
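For the code-curious (this is a podcast for the curious, after all), here’s a minimal Python sketch of what “deterministic” means in practice. The sensor reading, threshold and actions are all hypothetical, invented purely for illustration:

```python
# A toy deterministic reaction: the same stimulus always produces the
# same response in the same way, with nothing "learned" anywhere.
BRAKE_DISTANCE_M = 10.0  # hypothetical, hand-tuned threshold

def react(obstacle_distance_m: float) -> str:
    """Map a sensor reading to an action via fixed rules."""
    if obstacle_distance_m < BRAKE_DISTANCE_M:
        return "BRAKE"
    return "CRUISE"

# Identical input, identical output, every single time.
assert react(5.0) == "BRAKE"
assert react(15.0) == "CRUISE"
```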

There is very little about modern AI in combination with machine learning that is simple or deterministic. Over-simplifying hugely, in the hope that we can all understand it, imagine AI as having a decision matrix composed of various decision models, each trained independently to make predictions and therefore make decisions. OK, that wasn’t so simple, but stay with me. Let’s consider the decision matrix. The machine learning aspect is that this matrix, while starting from a repeatable baseline, is not necessarily producing a fixed outcome. It runs in combination with something often called adaptive boosting, which serves to boost models/decisions that are correct and lower the ones that are not. It can even combine groups of lower-ranked decisions to win over more highly ranked ones. It’s some clever shiz, and it is effectively “learning”.
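To make “adaptive boosting” a little more concrete, here’s a minimal sketch using scikit-learn’s off-the-shelf AdaBoost classifier on a synthetic dataset. The toy data is just a stand-in for real sensor features, and has nothing to do with any actual car’s training pipeline:

```python
# A minimal AdaBoost sketch: many weak models trained in sequence,
# with correct ones boosted and incorrect ones down-weighted, then
# combined into one weighted vote.
from sklearn.datasets import make_classification
from sklearn.ensemble import AdaBoostClassifier
from sklearn.model_selection import train_test_split

# Toy stand-in for labelled driving data.
X, y = make_classification(n_samples=1000, n_features=10, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

model = AdaBoostClassifier(n_estimators=50, random_state=0)  # 50 weak learners
model.fit(X_train, y_train)
print(f"accuracy: {model.score(X_test, y_test):.2f}")
```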

It’s exactly what we super advanced humans do in that we learn from our mistakes and make better decisions going forward.

It’s probably also sufficient to say that, at the present time, a car trained to drive in New York City wouldn’t stand a chance if it was dropped into Mumbai, where a human driver would be better able to adapt… for now. The combination of AI and machine learning has proven capable of driving on streets, largely in America, which are predictable.

Who’s at the Heart of the Self-Driving Car Revolution?

I don’t use the term “revolution” lightly. The leaders in this world aren’t your Fords, Hondas and Chevys but instead your tech giants like Google, which launched its dedicated brand Waymo after hiring some of the DARPA engineers who were responsible for kicking off the whole idea around 2004. Even more disruptive are the modern taxi firms like Uber and Lyft, working with the help of car companies like Volvo. Finally, Tesla is an obvious choice, with Elon Musk not letting any tech get the best of him. In fact, Tesla recently added an auto-pilot mode to some of their cars.

Tesla Autopilot Mode

It’s no surprise that, with the addition of auto-pilot, Tesla has also had a recent death, with a customer in the car being a bit too cocky with the new auto-pilot mode. It’s a bit like when cruise control was first added to cars and people didn’t really understand that they shouldn’t let go of the wheel. Auto-pilot IS self-driving, BUT Tesla were quite clear when it was released that it is “beta” and that the driver should stay alert. While it does do some impressive driving in predictable weather and environments like modern American highways, it isn’t so smart that it doesn’t get confused, and there are plenty of videos on YouTube of people trying to mess with it. It seems inevitable that somebody would quickly abuse the feature to their own fatal chagrin.

How Do Self-Driving Cars Get Information?

While we rely pretty much completely on our eyes, and to some extent our ears, for information, self-driving cars have a complex array of sensors, cameras and something called LIDAR (Light Detection and Ranging). Combine radar’s ranging idea with lasers and you have LIDAR. This is at the heart of information gathering. It creates a laser-generated 3D map of the car’s surroundings at, that’s right, the speed of light! In fact, advances in LIDAR are moving towards mapping not just what is “visible” but also hidden objects and objects around corners.

FRIKKIN LASERS!
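To give a flavour of how those laser returns become a 3D map, here’s a toy Python sketch converting a few polar LIDAR readings (a distance plus the angles the beam was fired at) into Cartesian points. The scan values are hypothetical, and real sensors are vastly more sophisticated:

```python
# Convert individual LIDAR returns (range + beam angles) into (x, y, z)
# points; thousands of these per sweep form the 3D point cloud.
import math

def to_xyz(distance_m, azimuth_deg, elevation_deg):
    az = math.radians(azimuth_deg)
    el = math.radians(elevation_deg)
    x = distance_m * math.cos(el) * math.cos(az)
    y = distance_m * math.cos(el) * math.sin(az)
    z = distance_m * math.sin(el)
    return (x, y, z)

# A handful of hypothetical returns accumulating into a point cloud.
scan = [(12.3, 0.0, -1.0), (12.1, 0.5, -1.0), (4.7, 45.0, 0.0)]
point_cloud = [to_xyz(d, az, el) for d, az, el in scan]
print(point_cloud)
```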

These autonomous vehicles will, in the foreseeable future, be extremely good at driving. Dare I say better than us. Let’s look at a fun example that’s in the works at the moment.

Self-driving Formula E is in concept form, and it’s called Roborace. The company at the heart of it is starting the world’s first motorsport series for driverless cars. It’s the concept of Formula E (electric cars only, please), except the race isn’t down to the best driver but to the best coders, who can design the best AI to drive the car. It’s in very early stages, but… would people watch it? So far, very few of the companies working on the cars have picked up the challenge to be a part of this new sport, but it feels like the future. Something I found fascinating, and that stayed with me, was how a car could drive itself around a track a few times and the sensor data from that could be used to train other cars before they’ve ever driven it. Like a simulator you can inject straight into the brain… kind of like The Matrix!

HALF WAY??

Self-driving cars are one of the modern examples of AI and machine learning that we can attempt to wrap our heads around because… we drive cars. AND… we, for the most part, think we’re great at it. Frankly, and controversially, I have more faith in AI than I do in humans to react in what appears to be a pragmatic way regardless of the lives at risk, even in this immature phase of self-driving cars.

Around 1.3 million automobile-related deaths worldwide in 2017. Great work, humans!

The Blame Game

Let’s get back to the ethical questions. What if a self-driving car kills a human? Let’s leave the decision-making aspect aside for the moment.

I had an interesting conversation about some of this tech recently where the word culpability was used. I found it an interesting word choice, as it implies wrongdoing. Blame doesn’t change the outcome of our ethical dilemma, but it seems that many people aren’t satisfied until we can be sure that errors can be attributed and perhaps recompense can be confirmed. In terms of the cars, they will still be the responsibility of the company, and so will the employees who worked on making them. Think of it like a plane crash: whether it’s down to mechanical failure, poor maintenance, or an auto-pilot that got confused, corporations are taking this seriously, as mistakes will be expensive in so many ways.

That’s actually a pretty good analogy, because if you think you haven’t been inside a self-driving vehicle, think again. If you’ve taken a flight, you’ve been in one. The pilots are present because, yes, they could fly the plane the entire distance, much like you can drive a car, but technically speaking, like Tesla’s auto-pilot, the plane flies itself for a large part of your journey. Let’s be clear, though… the path is generally a lot clearer at 30,000 feet, and even then, autopilots on the latest generation of fly-by-wire jets with quadruple redundancies have been known to go haywire, requiring quick thinking from actual pilots. Additionally, airline pilots have considerably more training and awareness than the guy who was killed in a Tesla crash because he was watching a Harry Potter movie whilst in auto-pilot.

Are car autopilots the equivalent of airplane autopilots?

Mistakes, unlikely or not, are inevitable, and that absolutely needs to be addressed; I believe answers are possible. The blame game must be played if deaths are involved, because we’re human and we love a good bit of blame, don’t we?

The problem with being blame-loving humans faced with AI is that there won’t be a driver, and nor will there be one single person or programmer involved. There will be no single line of code or algorithm clearly to blame. Interestingly, humans do love to blame or attribute culpability to other humans. That generally seems to be where we are satisfied, unlike the discussion about cars, where blame has theoretically been extended to the coder, the company and even the CEO. Applying that rationale to humanity (if you want to get existential), if a human killed somebody you could theoretically blame the high school bully who contributed to their psychological instability, or perhaps the parents who raised and provided the genetic and environmental conditioning for that person, or, if you look at it from an ancient creationist perspective, you may blame Odin (other gods are available) for designing the flawed product in the first place. After that comment I’m making a special hate mail email address just to handle reactions. It’s… [email protected]

We humans have the self-awareness to take responsibility upon ourselves, which is convenient for blame and emotional care more than financial recompense. What people don’t seem to like is the potential for a decision on life being made programmatically, regardless of whether the decision was better or worse. There is a trend for humans to see ourselves as special, and to believe our decisions are somehow not just programmatic or, the human equivalent, sub-conscious.

Self-driving cars are tools. Like a chainsaw. They will still be the responsibility of the company that made them, and so will the employees who worked on making them. Corporations are taking this seriously.

Cast yourself 50 years into the future, potentially within our lifetime. Imagine the car’s AI being conversant enough to post-process its choices and machine-learn immediately from them, to the extent that it admits it is at fault without investigation. Will that make us feel better when it says sorry? Let’s go further and ask ourselves: how long after AI considers itself to be simply “I”, and therefore self-aware, will it be before we acknowledge it to be so? Go see the movie Ex Machina.

The Big Ethical Dilemma

Ignoring the argument that these cars, in a mature state, are unlikely to be in a situation where they have to make a decision about endangering one life versus another (for example, sacrificing the passenger to save the school kids), we humans can’t help but project this onto these cars and ponder. How could these decisions possibly be made by an AI, and how could it be programmed with some form of base ethics?

Jean-Francois Bonnefon at the Toulouse School of Economics in France put it well: there is no right or wrong answer to these questions. Public opinion will vary and most likely disagree. His team did work in experimental ethics, posing ethical dilemmas to a large number of people to see how they respond. One way to approach this kind of problem is to act in a way that minimizes the damage: killing one person is better than killing two.
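Reduced to code, that “minimize the damage” rule is almost embarrassingly simple, which is exactly what makes people uneasy. Here’s a deliberately naive Python sketch with hypothetical outcomes and numbers invented by me; no real car is anywhere near this tidy:

```python
# Naive utilitarian chooser: pick the manoeuvre with the fewest
# predicted casualties. All values are hypothetical.
def least_harm(predicted_casualties):
    """Return the action whose predicted casualty count is lowest."""
    return min(predicted_casualties, key=predicted_casualties.get)

outcomes = {"swerve_left": 2, "swerve_right": 1, "brake_straight": 3}
print(least_harm(outcomes))  # -> "swerve_right"
```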

However, if people think self-driving cars are programmed to sacrifice their owners, they won’t buy them. If you recall the statistics about road deaths earlier (1.3 million-ish), people will then be making a decision which means they are MORE likely to die, because ordinary cars are simply way more hazardous.

The gist of Jean-Francois Bonnefon’s results was that people had a rather utilitarian response and were often happy to sacrifice the passengers to save the group. Until, of course, the questions were rephrased to indicate that they themselves were the passenger.

And therein lies the paradox. People are in favor of cars that sacrifice the occupant to save other lives—as long as they don’t have to drive one themselves.

All that said, given the complexity of machine learning and AI, it’s worth asking whether the car’s constantly changing decision matrix could, over a long timeline of machine learning, change its behaviour with regard to these ethical decisions. Can we assume that the ethical aspect of the car’s programming will be retained deterministically and not be altered by its own learning algorithms?
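One plausible answer (my speculation, not a documented vendor design) is to keep the ethics out of the learning loop entirely: a fixed, hand-coded rule layer that can veto whatever the ever-changing model proposes. A minimal sketch, with hypothetical action names:

```python
# The model may keep learning; the rule layer never does.
FORBIDDEN = {"mount_sidewalk", "run_red_light"}  # hypothetical actions

def learned_policy(sensor_state):
    """Stand-in for a continuously retrained model's suggestion."""
    return sensor_state.get("suggested_action", "brake")

def safe_policy(sensor_state):
    """Apply a fixed, deterministic veto on top of the learned output."""
    action = learned_policy(sensor_state)
    return "brake" if action in FORBIDDEN else action

print(safe_policy({"suggested_action": "mount_sidewalk"}))  # -> "brake"
```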

https://www.technologyreview.com/s/542626/why-self-driving-cars-must-be-programmed-to-kill/

https://www.kdnuggets.com/2017/06/machine-learning-algorithms-used-self-driving-cars.html

Why Bother?

Backing up… technology is moving in this direction because of the advantages in the bigger picture.
Starting with the environmental benefits alone: cars would not need to be purchased anymore. We have a tendency to purchase a car for the worst-case scenario of long drives with lots of passengers (the top 5 selling vehicles in the USA were either pickup trucks or SUVs) and then we drive those cars alone for short distances. In a Utopian self-driving world you could order a car in a journey-specific way: small cars for quick journeys, large ones for long hauls, without needing to own or maintain anything. The right car for the journey would be environmentally sound and would be cheaper. For services like pizza delivery, you don’t need to tip the driver. Boom!

All that aside, the biggest cause of automobile deaths is humans, not cars. If we applied good sense to driving, we would simply ban humans from cars completely. Driving is a grandfathered-in technology that would be considered absurdly risky if it were pitched on something like Dragons’ Den or Shark Tank today. Kind of like cigarettes. Can you imagine pitching a big box that you get inside, which travels at up to 100mph, under your full control, whilst only a few feet away from hundreds of other boxes? It’s OK though, because there are painted lines to keep you apart. Oh yeah, and it’s powered by a controlled explosion of fossil fuels. Show me the money!

Around 15,000 people in the USA were killed by guns last year, roughly half a million by smoking, and more than a million worldwide by cars. Food for thought. We should do our best not to turn every self-driving car incident into a witch hunt when this technology will very likely change the face of mortality statistics significantly.

A final look forward… Hive-minded vehicles

In the “not so distant future” self-driving car world, we wouldn’t have any human-driven vehicles AND… the cars would communicate with each other to establish the best routes, adjust speeds, suggest travel times, etc. To be fair, my puny self-aware brain is not even coming close to guessing all the advantages of a hive-minded car scenario. What it would mean is that, unless some uber hacker (see what I did there) compromised the car network, the theory is that we would be considerably safer yet again. Cars would, like humans walking in busy streets, cooperate and move in silent unity and ideally never(?) crash. Yours truly looks forward to seeing that kind of perfection in action, but it unfortunately may not be in my lifetime. Not with all these humans still driving around.
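Just to sketch the idea (a pure thought experiment, with a message format I made up), hive-minded coordination could be as simple as each car broadcasting its position and speed, and everyone easing off to match the slowest vehicle ahead:

```python
# Toy vehicle-to-vehicle cooperation: never outrun the car in front.
from dataclasses import dataclass

@dataclass
class CarState:
    car_id: str
    position_m: float  # distance along a shared route
    speed_mps: float

def adjust_speed(me, broadcasts):
    """Match the slowest broadcast vehicle ahead of us, if any."""
    ahead = [c for c in broadcasts if c.position_m > me.position_m]
    if not ahead:
        return me.speed_mps
    return min(me.speed_mps, min(c.speed_mps for c in ahead))

fleet = [CarState("a", 100.0, 25.0), CarState("b", 140.0, 20.0)]
print(adjust_speed(fleet[0], fleet))  # -> 20.0, cooperating silently
```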


*** This is a Security Bloggers Network syndicated blog from Codifyre authored by Stephen Giguere. Read the original post at: https://codifyre.com/culture/self-driving-cars/