When it comes to automotive technology, many people think that self-driving cars could be the best thing to happen to transportation since the invention of the internal combustion engine. The potential benefits are obvious: driverless cars could ease traffic jams and prevent hundreds of thousands of deaths caused by driver error.
But as more and more self-driving cars take to the streets, engineers and drivers alike face a seemingly impossible moral dilemma: to kill or not to kill.
The Trolley Problem
In philosophy, there’s a popular thought experiment called the “trolley problem”. It goes like this: if you had to push one large person in front of a moving trolley to save five other people, would you do it?
Now picture the following scenario: you are in a self-driving car and, after it turns a corner, you find yourself on a direct collision course with a group of 10 pedestrians. The only alternative is to swerve into a wall, killing you. What should the car do?
This thought experiment has taken hold in driverless-car engineering and programming. How should a self-driving car react when it’s impossible to avoid harming everyone? Should it kill its occupants for the greater good, or should it protect its passengers at all costs?
This ethical dilemma becomes even more complicated when you add other factors into the equation. What if there are four passengers in the car, and two of them are children? Should the algorithmic morality change in that case? What if all the occupants are adults? Do they take on a moral responsibility by choosing to ride in a driverless car, and should they therefore sacrifice themselves?
The answers to these moral questions matter enormously because they will shape public acceptance of driverless cars. If, for example, self-driving cars are programmed to kill their passengers in certain situations, dealerships won’t want to sell them, and many people would probably choose not to buy one.
Do It for the Greater Good
So, should a driverless car be programmed to kill you to save others? Utilitarians would say that one should always do whatever produces the greatest happiness for the greatest number of people, while deontologists believe that life is sacred and that you should never kill. But what does the public think?
A group of scientists led by Jean-Francois Bonnefon from the Toulouse School of Economics in France ran a series of experiments to gauge public opinion on this moral dilemma. According to technologyreview.com, they surveyed several hundred workers on Amazon’s Mechanical Turk.
The participants were given different scenarios in which one or more pedestrians could be saved if the cars were programmed to sacrifice their passengers.
The results were striking: 75% of people thought that a self-driving car should always injure or kill the passenger, even to save just one pedestrian. But here’s the catch: people are comfortable with this idea only as long as they don’t have to ride in such a car themselves.
Most people believe that it is more ethical to take the action that brings the greatest happiness to the greatest number of people. By that logic, a driverless car should take the action that saves the greatest number of lives. So, if there are four passengers in the car but only one pedestrian, the program should choose to sacrifice the pedestrian and save the occupants. A rough sketch of that decision rule appears below.
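To make the arithmetic concrete, here is a minimal sketch of such a utilitarian decision rule, written in Python. It is purely illustrative: the maneuver names and casualty counts are hypothetical, not any carmaker’s actual software, and a real system would have to reason over noisy sensor estimates and probabilities rather than tidy integers.

```python
from dataclasses import dataclass

# Purely illustrative: maneuver names and casualty counts are hypothetical,
# not any carmaker's actual API or decision logic.

@dataclass
class Maneuver:
    name: str
    passenger_deaths: int   # expected occupant fatalities for this maneuver
    pedestrian_deaths: int  # expected pedestrian fatalities for this maneuver

    @property
    def total_deaths(self) -> int:
        return self.passenger_deaths + self.pedestrian_deaths


def choose_maneuver(options: list[Maneuver]) -> Maneuver:
    """Utilitarian rule: pick the maneuver with the lowest total death toll."""
    return min(options, key=lambda m: m.total_deaths)


# The scenario above: four passengers in the car, one pedestrian in its path.
options = [
    Maneuver("stay on course", passenger_deaths=0, pedestrian_deaths=1),
    Maneuver("swerve into wall", passenger_deaths=4, pedestrian_deaths=0),
]
print(choose_maneuver(options).name)  # -> stay on course
```

Notice that even this toy version hides its moral judgments in the inputs: change how each life is counted, say, by weighting children more heavily, and the “right” maneuver changes with it.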
Algorithms Can’t Judge Human Intention
The self-driving car’s moral dilemma has certainly plagued the autonomous-car industry. Engineers, scientists, philosophers, and lawyers alike are trying to figure it out, but the more they try, the more complicated it gets.
What if a pedestrian acted recklessly and jumped in front of the car? Maybe the pedestrian wasn’t paying attention, or maybe he did it on purpose, intending to make the car swerve and kill the passengers. It sounds like a Hollywood plot, but carmakers must account for every possibility to make driverless cars as safe as possible.
At this point, artificial intelligence can’t judge human intentions. It is extremely difficult to create an algorithm that will make the right decision when confronted with such a complicated moral dilemma. Engineers must work with psychologists and philosophers to find the best possible solution to this problem.
It is very unlikely that a self-driving car would ever be confronted with a situation offering only two courses of action, each leading to death. But with more and more driverless cars on the road, it is crucial to examine these moral questions.
Driverless cars are an important step in society’s journey toward safer streets. Some estimates suggest that self-driving cars could reduce traffic deaths by up to 90%. But the remaining 10% can’t be ignored, and how a car should behave in those cases is still up for debate.
What do you think is the moral thing to do? Should a driverless car sacrifice its passengers for the greater good or should it always protect its occupants?