Plato, Kant, Musk

Two of these men are among the greatest philosophers of ethics the world has ever seen. But in my opinion, they had it easy. Plato and Kant were thinkers. Their ideas were important, but they never faced imminent, world-altering decisions based on their ethical principles.* The third is Elon Musk, the CEO of Tesla. Although not a traditional philosopher, one might argue that his ethical stances have far greater implications than those of Plato, Kant, or, for that matter, any great philosopher in human history.

What I’m talking about is autonomous vehicles. Last week’s lecture got me thinking a lot about the implications of artificial intelligence in driverless cars, and of all the curiosities of their development, I find none more intricate than the programming of their ethical codes.

Cars are inherently dangerous. Two tons of metal moving at sixty-plus miles an hour is not safe. The world is filled with uncertainty, and no matter how far we advance our technology, we will probably never reach a point where nothing can go wrong. Autonomous vehicles will therefore need to be prepared to react to random occurrences, and to reach Level 5 automation, programmers are going to have to create a set of guidelines for artificially intelligent vehicles to follow in preparation for collisions that will sometimes lead to unavoidable fatalities.

So, no, I am not trying to say that Elon Musk himself bears the weight of ethics in autonomous vehicles. There are certainly companies ahead of Tesla in this developing market, but some might say he’s the face of the industry right now, so let’s roll with it for the sake of this blog.

There are two major issues that I find extremely important as we face the challenge of programming ethics into these vehicles. I think each issue is described well by a metaphor: the second you will likely be familiar with, but the first is my own.

Step 1: Reintroducing the Cougar

The first major hump we will have to overcome comes in the wake of two fatal crashes involving autonomous vehicles from Uber and Tesla that have been in the news recently. It is going to be natural for people to want to move away from self-driving vehicles after these crashes, but they shouldn’t. This is not a problem of programming ethics in particular, but more an issue with the shortsightedness of human ethics as a whole.

So, the metaphor: a 2016 publication by scientists at the University of Washington found that if eastern cougars were reintroduced to 19 states in the US to hunt white-tailed deer, their natural prey, deer would be involved in far fewer collisions with vehicles, preventing an estimated 155 human deaths and 21,400 human injuries and saving $2.3 billion over the course of 30 years. But it will never happen. Though far fewer than the deaths from deer collisions, just under 30 people would be mauled to death by a cougar in that same period. Not a pretty way to go. Reintroducing the cougars would certainly result in far fewer deaths and expenses for drivers, but people just can’t stomach the idea of dying at the hands of a cougar (fair enough). The cougars are our driverless vehicles, and we, the human drivers, are the deer. If we were able to fully integrate driverless cars, at least theoretically, fatal accidents would become almost nonexistent. Imagine a city where every car is programmed to run in collaboration with every other. There would be no traffic, no street parking, and no stoplights.
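To put the trade-off in rough utilitarian terms, here is a back-of-the-envelope sketch using the 30-year figures cited above. The netting is my own simplification for illustration, not the study’s model:

```python
# Rough comparison of the cougar-reintroduction trade-off over 30 years,
# using the figures quoted above (an illustration, not the study's analysis).
deaths_prevented = 155          # fewer deer-collision deaths
injuries_prevented = 21_400     # fewer deer-collision injuries
dollars_saved = 2_300_000_000   # estimated savings in USD
cougar_deaths = 30              # roughly, people mauled over the same period

net_lives_saved = deaths_prevented - cougar_deaths
print(f"Net lives saved over 30 years: {net_lives_saved}")   # ~125
print(f"Injuries prevented: {injuries_prevented:,}")
print(f"Dollars saved: ${dollars_saved:,}")
```

By the numbers the cougars win easily; the sticking point, as the next paragraph argues, is how those 30 deaths would happen, not how many there would be.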

Now, of course, there will still be crashes. Cougars will still maul people, and they certainly cannot catch every deer that is about to run out in front of a car, but there would certainly be fewer crashes. That is both the beauty of and the problem with human nature: we are very shortsighted and emotional beings. In our minds, it is wrong to send a car (or a cougar) out into the world that we know might kill people. The driver-error accidents and the collisions with deer seem much more incidental, and are easier for us to accept. I think that’s an ethical fallacy our brains need to reprogram before we can immerse ourselves in an ecosystem of autonomous vehicles. We have to learn to focus not on how people are being killed but how MANY people are being killed by this innovation. The hope is that in the long run it would be far fewer.

Reading those last few lines back, I sound kind of heartless. I recognize that this idea is much easier said than done. I don’t think the idea of intentionally sending a potential deathtrap onto the road is something I can stomach, but that’s part of our emotional nature, and I guess those are the ethics that make us human.

Step 2: The Trolley Car Experiment

The second metaphor is a much more common one, and a staple of BC freshman portico classes:

In the experiment, one imagines a runaway trolley speeding down a track to which five people are tied. You can pull a lever to switch the trolley to another track, to which only one person is tied. Would you sacrifice the one person to save the other five, or would you do nothing and let the trolley kill the five people?

It’s a similar argument to reintroducing the cougar, but here I want to focus on programming ethics into the vehicles themselves. I don’t really think I need to explain the metaphor: the trolley is the Tesla Model 3, and the person behind the lever is Elon Musk, our trusted philosopher. To have a completely autonomous car, programmers like the ones at Tesla will have to decide, in the most premeditated fashion, whether to kill one person or do nothing and kill five. The programmers will have even more complicated questions to answer than this, though. For example, will the car be optimized to promote human welfare, or will it always do its best to protect its passengers? Imagine sitting in a car knowing it could intentionally kill you if it registered that doing so was best for the common good. Could the decision be left in the consumer’s hands? Will we one day have ethics settings on our cars’ dashboards? These are heavy questions, and ones that I am not prepared for quite yet.
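Just to make the “ethics settings” question concrete, here is a purely hypothetical sketch. The mode names, the scoring, and the decision rule are my own toy illustration of the trade-off, not anything Tesla or any manufacturer has actually announced or implemented:

```python
from enum import Enum

class EthicsMode(Enum):
    PROTECT_PASSENGERS = "protect_passengers"  # minimize harm to occupants first
    COMMON_GOOD = "common_good"                # minimize total harm, even at occupants' expense

def choose_action(actions, mode):
    """Pick among candidate maneuvers, each scored with estimated casualties.
    `actions` is a list of dicts like {"name": ..., "passenger_harm": ..., "total_harm": ...}.
    A toy decision rule, not a real control policy."""
    if mode is EthicsMode.PROTECT_PASSENGERS:
        return min(actions, key=lambda a: (a["passenger_harm"], a["total_harm"]))
    return min(actions, key=lambda a: (a["total_harm"], a["passenger_harm"]))

# Toy trolley-style scenario: stay the course and hit five pedestrians,
# or swerve into a barrier and risk the single passenger.
scenario = [
    {"name": "stay_course", "passenger_harm": 0, "total_harm": 5},
    {"name": "swerve",      "passenger_harm": 1, "total_harm": 1},
]
print(choose_action(scenario, EthicsMode.PROTECT_PASSENGERS)["name"])  # stay_course
print(choose_action(scenario, EthicsMode.COMMON_GOOD)["name"])         # swerve
```

The unsettling part is that whichever default gets shipped, someone had to choose it in advance, which is exactly the premeditation the trolley metaphor captures.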

I have a lot of confidence in autonomous vehicles, but my hope is that there are social scientists and ethicists working alongside the programmers from the very start to make sure that when the time comes, we are ready.

 

*Note: I could be totally wrong about that Plato/Kant comment. I’m not a philosophy major. Sorry if my ignorance offended anyone.


9 comments

  1. Jobabes121 · ·

    Jake, this is a wonderful post, my man. Great metaphors, and I believe they are well suited to describing autonomous vehicles. I also agree with your point that people are A LOT more concerned about that one particular death from an autonomous vehicle than about the many deaths occurring in normal traffic with human drivers. It may be true that autonomous vehicles have caused far fewer deaths than human driving. But because it’s such a new technology and people have incredibly high expectations of its ability to avoid deaths, accidents involving autonomous vehicles hit the media a lot harder than typical driving deaths. It makes sense, yet rationally speaking, autonomous vehicles seem to be a lot safer than drunk drivers, teenagers who love to speed and drive carelessly, and hurried, aggressive drivers who endanger others.

    Your point about social scientists working alongside the autonomous vehicle developers is fascinating and sounds like a must. Until the public accepts that it’s a safe, viable alternative to our current driving system, autonomous driving will remain a “prototype” that cannot spread further to save more lives. I believe the prototyping and testing must continue, but the social scientists and image-making must receive just as much emphasis throughout the development of autonomous vehicles. If they do not overcome this hurdle, I doubt this technology will ever replace driving entirely, regardless of how superb it is.

  2. thebobbystroup · ·

    I find it ironic that you mention Kant and then go on to say, “We have to learn to focus not on how people are being killed but how MANY people are being killed by this innovation.” It seems to me you are operating (for better or worse) on more of a Utilitarian view of ethics than a Categorical Imperative.

    After reading the hyperlinked article about levels 1-5 of autonomy, I wondered what level 6 might look like. If level 5 is equal to what a human could do, then maybe level 6 would be above and beyond. Perhaps a car would calculate ‘the best for the common good’ and change speed to optimize traffic flow (perhaps going slower than desired), incurring risk to the passengers, or change engine output to minimize environmental impact. I doubt many would ever buy a car that could potentially choose to kill them unless it was mandated by law, but it is an interesting concept. Food for thought: if a potential murderer had a ‘level 6’ vehicle, might it drive them to jail instead (see: Minority Report)?

    1. realjakejordon · ·

      I appreciate you mentioning my use of Kant. I realized that I wrote this with a strong Utilitarian point of view. What I really didn’t consider until I took the MIT moral machine mentioned by @jamessenwei is that a Categorical Imperative should probably play a pretty important role in the consideration of these ethics, namely as far as pedestrians following crosswalks go. If someone is crossing illegally, should the passenger be punished for that action? What if it’s a child? These are complicated questions, and they make me all the more curious about what decisions programmers are ultimately going to make.

  3. graceglambrecht · ·

    Awesome viewpoint. I think one of the biggest things that hold back autonomous work, and even innovation in general, can be that human element and intention, as you mentioned. We discuss the idea of value-sensitive design a lot in my philosophy class, and whether the people developing these technologies have a responsibility to look at the potential consequences of that tech, whether intentional or unintentional. I think ethics, as a human construct, will always play an important role in future developments in tech.

    I do agree with you that humans aren’t always great at making choices. In both examples you gave, it’s easier to justify doing nothing rather than doing something that, while stopping the deaths of more people, directly affects another group of people. Human beings have a difficult time with responsibility and holding blame or guilt. Super interesting article, and I’m hoping to see autonomous vehicles in the future!

  4. jamessenwei · ·

    I tweeted about this post because it really reminded me of MIT’s moral machine (http://moralmachine.mit.edu/). I agree that we ourselves need to “reprogram” our own brains to see the morality of driverless cars, but another huge problem will be programming the morality of these cars. When these cars are on the road, they become active moral actors capable of killing and saving people. This morality needs to be implemented in our cars because they will inevitably be faced with hard moral decisions. Should a car value the lives of its occupants over the lives of others on the road? These questions are tough for us to answer on our own, let alone for driverless cars. Before we program morality into our cars, we must first ask ourselves what the most moral decision is.

  5. Nice way to bring out the ethics inherent in all AI (and computer) design.

  6. RayCaglianone · ·

    Really awesome post Jake, it definitely brought me back to Portico! Your point about the programming of self-driving cars taking a utilitarian approach to safety was especially thought-provoking. I can’t imagine that many people are going to want to get behind the wheel of an autonomous vehicle if there is even a shred of a chance that the car might not prioritize their safety. That would take a truly utilitarian outlook from society as a whole, and I don’t think the U.S. is anywhere close to that as it stands. The consumer is generally self-interested when making a purchase decision; sure, someone might consider larger social benefits when browsing for a product, but generally they are going to remain primarily self-interested (can econ majors back me up on that?). And looking at Maslow’s Hierarchy of Needs, safety is one of the first needs to fulfill. Even if the chance the car acts against the customer is .0000001%, it’s going to take a far less self-interested society to embrace that sort of radical change.

  7. Tully Horne · ·

    I think it was brave of you to go out on a limb and make this comparison. Although I am not a philosophy major either, I see where you are coming from. I think both Kant and Plato were incredible in their own ways because, although they were just thinkers, they were among the first to think the way they did, and they have been able to impact people for thousands of years after (and still do). Musk, although not yet around long enough to show an impact on others and the way they think and pursue ideas, is definitely much more of a doer.

    In my opinion, this ethical question is one that will never be completely solved. The trolley example is the best one because there is no good answer for what should happen. The only everyone-wins solution would be if nobody died, and the only way to get there is to implement significantly improved crash-avoidance methods or crash-safety devices. But that is the whole basis of ethics: it is always an ongoing discussion, and to get things done, at some point you have to take a stance. I think the stance you are presenting is “releasing the cougar,” or the autonomous car, and I agree with you. The potential positives that come with this technology far outweigh the negatives. People just have to get themselves over the mental hurdle of accepting that death will likely always exist to some extent in automobile activity.

  8. markdimeglio · ·

    This is an awesome article! Very cool idea, and it’s something people are not thinking about much now, but soon the general public will likely be very aware of this moral issue.

    One thing that I wonder about, and it is something you mentioned in your article, is whether it is possible for there to be no accidents caused by autonomous vehicles. Could the technology become so advanced that crashes are absolute rarities? Only time will tell, but I hope these moral decisions prove to be ultra rare in the future.

    Also, your intro is something that I think about a lot. Philosophers of the past really were just thinkers. Business leaders today need to really think about the benefit and harm they are causing through their work. I think tech is especially guilty of not doing this. Look no further than the mantra “move fast and break things.”
