Two of these men are among the greatest philosophers of ethics the world has ever seen. But in my opinion, they had it easy. Plato and Kant were thinkers. Their ideas were important, but they never faced imminent, world-altering decisions based on their ethical principles*. The third is Elon Musk, the CEO of Tesla. Although not a traditional philosopher, one might argue that his ethical stances have far greater implications than those of Plato, Kant, or any other great philosopher in human history.
What I’m talking about is autonomous vehicles. Last week’s lecture got me thinking a lot about the implications of artificial intelligence in driverless cars, and of all the curiosities of their development, I find none more intricate than the programming of their ethical codes.
Cars are inherently dangerous. Two tons of metal moving at sixty-plus miles an hour is not safe. The world is filled with uncertainty, and no matter how far we advance our technology, we will probably never reach a level where nothing goes wrong. Autonomous vehicles will therefore one day need to be prepared to react to random occurrences. At some point, to reach Level 5 automation, programmers are going to have to create a set of guidelines for artificially intelligent vehicles to follow in preparation for collisions that will sometimes lead to unavoidable fatalities.
So, no, I am not trying to say that Elon Musk himself bears the weight of ethics in autonomous vehicles. There are certainly companies ahead of Tesla in this developing market, but some might say he’s the face of the industry right now, so let’s roll with it for the sake of this blog.
There are two major issues that I find extremely important as we develop the ethics that will be programmed into these vehicles. I think each issue is captured well by a metaphor: the second you will likely be familiar with, but the first is my own.
Step 1: Reintroducing the Cougar
The first major hump we will have to overcome comes in the wake of two fatal crashes involving autonomous vehicles, one from Uber and one from Tesla, that have been in the news recently. It is going to be natural for people to want to move away from self-driving vehicles after these crashes, but they shouldn’t. This is not a problem of programming ethics in particular, but more an issue with the shortsightedness of human ethics as a whole.
So, the metaphor: a 2016 study by scientists at the University of Washington found that if eastern cougars were reintroduced to 19 US states to hunt white-tailed deer, their natural prey, deer would collide with vehicles far less often, preventing an estimated 155 human deaths and 21,400 human injuries and saving $2.3 billion over the course of 30 years. But it will never happen. Though far fewer than the deaths from deer collisions, just under 30 people would be mauled to death by a cougar in that same period. Not a pretty way to go. Reintroducing the cougars would certainly mean far fewer deaths and expenses for drivers, but people just can’t stomach the idea of dying at the hands of a cougar (fair enough). The cougars are our driverless vehicles, and we, the human drivers, are the deer. If we were able to fully integrate driverless cars, at least theoretically, fatal accidents would become almost nonexistent. Imagine a city where every car is programmed to run in collaboration with every other. There would be no traffic, no street parking, and no stoplights.
Now, of course there will still be crashes. Cougars will still maul people, and they certainly cannot catch every deer that is about to run out in front of a car, but there would certainly be fewer. And that is both the beauty of and the issue with human nature: we are very shortsighted and emotional beings. In our minds, it is wrong to send out into the world a car (or a cougar) that we know might kill people. Driver-error accidents and collisions with deer seem much more incidental, and are easier for us to accept. I think that’s an ethical fallacy our brains need to reprogram before we can immerse ourselves in an ecosystem of autonomous vehicles. We have to learn to focus not on how people are being killed but on how MANY people are being killed by this innovation. The hope is that in the long run it would be dramatically fewer.
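To make that “how MANY” argument concrete, here is a back-of-the-envelope sketch using the study’s own 30-year figures. The two input numbers come straight from the estimates cited above; everything else is simple arithmetic, deliberately crude:

```python
# A crude, back-of-the-envelope comparison using the cougar study's
# 30-year estimates cited above. No real data here, just arithmetic.

deaths_prevented = 155   # deer-collision deaths the cougars would prevent
deaths_caused = 30       # people killed by cougars over the same 30 years

net_lives_saved = deaths_prevented - deaths_caused
print(f"Net lives saved over 30 years: {net_lives_saved}")                           # 125
print(f"Deaths prevented per death caused: {deaths_prevented / deaths_caused:.1f}")  # ~5.2
```

By the numbers, the trade is lopsided in the cougars’ favor. Emotionally, it still feels like a loss, and that gap is exactly the fallacy I mean.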
Reading those last few lines back, I sound kind of heartless. I recognize that this idea is much easier said than done. I don’t think the idea of intentionally sending a potential deathtrap onto the road is something I can stomach either, but that’s part of our emotional nature, and I guess those are the ethics that make us human.
Step 2: The Trolley Car Experiment
The second metaphor is a much more common one, and a staple of BC freshman Portico classes:
In the experiment, one imagines a runaway trolley speeding down a track to which five people are tied. You can pull a lever to switch the trolley to another track, to which only one person is tied. Would you sacrifice the one person to save the other five, or would you do nothing and let the trolley kill the five?
It’s a similar argument to reintroducing the cougar, but here I want to focus on programming ethics into the vehicles themselves. I don’t really think I need to explain the metaphor: the trolley is the Tesla Model 3, and the person behind the lever is Elon Musk, our trusted philosopher. To have a completely autonomous car, programmers like the ones at Tesla will have to decide, in the most premeditated fashion, whether to kill one person or do nothing and kill five. The programmers will have even more complicated questions to answer than this, though. For example, will the car be optimized to promote overall human welfare, or will it always do its best to protect its passengers? Imagine sitting in a car knowing it could intentionally kill you if it registered that doing so was best for the common good. Could the decision be left in the consumer’s hands? Will we one day have ethics settings on our cars’ dashboards? These are heavy questions, and ones that I am not prepared for quite yet.
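Just to make the “ethics settings” idea concrete, here is a purely hypothetical sketch. This is not how Tesla or anyone else actually programs their cars; every name, option, and number below is invented for illustration:

```python
from enum import Enum

class EthicsSetting(Enum):
    PROTECT_PASSENGERS = 1   # always minimize risk to the people in the car
    MINIMIZE_TOTAL_HARM = 2  # minimize expected harm to everyone involved

def choose_maneuver(options, setting):
    """Pick the maneuver with the lowest expected harm under the chosen
    ethics setting. Each option maps a maneuver name to a pair of
    (expected_passenger_harm, expected_total_harm) estimates."""
    if setting is EthicsSetting.PROTECT_PASSENGERS:
        return min(options, key=lambda name: options[name][0])
    return min(options, key=lambda name: options[name][1])

# An unavoidable-collision scenario with two made-up expected-harm scores:
options = {
    "swerve": (0.7, 0.7),  # risks the passenger, spares the pedestrians
    "stay":   (0.1, 2.5),  # protects the passenger, endangers five others
}
print(choose_maneuver(options, EthicsSetting.PROTECT_PASSENGERS))  # stay
print(choose_maneuver(options, EthicsSetting.MINIMIZE_TOTAL_HARM)) # swerve
```

Even in this toy version, the uncomfortable part is obvious: someone has to write that if statement, and someone has to choose the default.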
I have a lot of confidence in autonomous vehicles, but my hope is that social scientists and ethicists are working alongside the programmers from the very start, so that when the time comes, we are ready.
*Note: I could be totally wrong about that Plato/Kant comment. I’m not a philosophy major. Sorry if my ignorance offended anyone.