Sep 19, 2016

The Moral Philosophy of Self-Driving Cars

Since self-driving cars started getting close to consumer availability, the question of how the car's internal computer should handle accidents has been debated, mostly on ethical grounds. The discussions typically take the form of thought experiments in which people are confronted with ambiguous situations that challenge their rationality and moral sentiments: e.g., if you had no choice, would you rather kill innocent person A or innocent person B, save a group of people or save yourself, etc.

Here are some interesting reads and a couple of videos:

And there is also the code-access issue, which I am not dealing with here but which is worth mentioning:

All these examples tackle our individual morality, and the related decisions are usually hard to make. Sometimes rationality and ethics point in opposite directions. Moreover, the instinctive fast reaction when we actually find ourselves in a difficult situation may not coincide with the decision we would make if we had enough time to evaluate it.
I find all these thought experiments extremely interesting, but I am not sure that this is the right approach to the issue.
I believe that the discussion should start from a collective perspective rather than from individual concerns. We should ask ourselves whether self-driving cars represent an improvement for our society, whether they allow us to live better lives. The answer is probably yes. Self-driving cars bring more advantages than disadvantages to people's lives and to society: traffic would flow more smoothly, there would be far fewer deaths on the roads, cars would be used more efficiently, oil consumption would drop, and we would gain more free time. While there are some risks and costs, the benefits appear to be much larger, so the cost-benefit analysis seems to justify adoption. From the perspective of society, then, the first impression is that there is really not much of an issue: self-driving cars are very likely to be a good thing.
Only once the issue is settled from society's perspective does it make sense to think about individual behaviour.
The issue is not how we should program the computers so that they reproduce our morality. That approach rests on the implicit assumption that self-driving cars must make decisions according to human morality, that morally behaving computers would produce the best possible outcome for society, and that this would in turn enhance the adoption of self-driving cars. But this assumption has yet to be proven right, and it is probably wrong. So wrong, perhaps, that giving self-driving cars a human-like decision process would simply make their adoption unsuccessful.
In fact, the individual issue to be considered is not how we want self-driving cars to make decisions, but rather what decisions self-driving cars should make in order to be preferred to human-driven cars by the individual user. If self-driving cars reproduce human morality, then there is no advantage in finding yourself in one of them if you happen to be in a risky situation. On the contrary, people might even feel safer in human-driven cars: if self-driving cars were programmed to minimize deaths in case of accidents, that would sometimes guarantee that the car kills its own passengers. I would not want to use a self-driving car in that case, even if I rationally and morally recognize that sacrificing the passengers can sometimes be the optimal decision. This is exactly the problem. If giving self-driving cars a moral decision process negatively affects their adoption, then that is the wrong decision process, since not adopting self-driving cars is harmful to society. The truly rational and moral action is not giving morality to self-driving cars, but rather programming them in a way that promotes their adoption; and this may require a counterintuitive solution, such as giving cars a decision process that does not correspond to our immediate morality.
Suppose, for instance, that self-driving cars become a reality and there are two producers. One employs a computer that always makes the morally optimal decision, even if this implies sacrificing the passengers. The other simply programs its cars to try to protect the passengers' lives no matter what. Which one would you buy? In which car do you want to be when an accident happens?
I think I know the answer.
But we may feel bad about our selfishness and decide that governments should force producers to give cars a morally consistent decision process. The second producer, which was selling more, is now forced to reprogram its cars.
Now you can either buy a moral self-driving car or a normal human-driven car. In which one do you want to be if you find yourself in an accident where the probability of death is 100% with the self-driving car (e.g. the computer always finds it optimal to kill its two passengers rather than four nearby pedestrians) and less than 100% with the human-driven car (your instinct is probably to save yourself)?
Again, I think I know the answer. How selfish of you!
Even if the overall likelihood of death is lower with the moral self-driving car, there are cases in which the probability of death becomes very high. These are perhaps unlikely cases. Will you take the risk? A low probability of certain death versus a higher probability of a non-fatal accident: not easily solved.
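To see why this trade-off is genuinely uncomfortable, here is a toy expected-risk calculation. All the numbers are made up purely for illustration; they are not real accident statistics.

```python
# Toy expected-risk comparison (all numbers are hypothetical assumptions,
# not real accident statistics).

# "Moral" self-driving car: accidents are rare, but in a small fraction
# of them the computer deliberately sacrifices the passengers, so death
# is certain in those cases.
p_accident_sdc = 0.001   # probability of an accident per trip (assumed)
p_sacrifice = 0.01       # fraction of accidents where passengers are sacrificed
p_death_sdc = p_accident_sdc * p_sacrifice * 1.0

# Human-driven car: accidents are much more frequent, but death is
# never certain in any of them.
p_accident_human = 0.01        # ten times more accidents (assumed)
p_death_given_accident = 0.05  # but usually survivable (assumed)
p_death_human = p_accident_human * p_death_given_accident

print(f"P(death | self-driving)  = {p_death_sdc:.6f}")
print(f"P(death | human-driven) = {p_death_human:.6f}")
```

Under these invented numbers the self-driving car is fifty times safer in expectation, yet it is the only option that carries a scenario of certain death, which is exactly what makes people hesitate.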
But perhaps I would behave as a free rider: let other people use self-driving cars, which makes traffic safer for me too, while driving my own car myself so as to avoid being purposely killed by a machine.
The problem is that I may not be the only free rider, which in the end would impede the adoption of socially safer self-driving cars.
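The free-rider logic can be sketched as a tiny model. Again, every parameter here is an invented assumption, chosen only to show the structure of the incentive problem: everyone's risk falls as adoption rises, but the small "sacrifice" risk is borne only by the self-driving passengers.

```python
# Toy free-rider model (all parameters are illustrative assumptions).
# Road risk falls for everyone as the share of self-driving cars rises,
# but each self-driving passenger also bears a tiny "sacrifice" risk.

def personal_risk(uses_sdc: bool, adoption_share: float) -> float:
    base = 0.01 * (1 - 0.8 * adoption_share)  # everyone benefits from adoption
    sacrifice = 0.0001 if uses_sdc else 0.0   # borne only by SDC passengers
    return base + sacrifice

# At any adoption level, the free rider is (slightly) better off
# driving a human car:
for share in (0.2, 0.5, 0.9):
    assert personal_risk(False, share) < personal_risk(True, share)

# Yet society is safest when everyone adopts:
full_adoption = personal_risk(True, 1.0)   # 0.01 * 0.2 + 0.0001 = 0.0021
no_adoption = personal_risk(False, 0.0)    # 0.01
print(full_adoption < no_adoption)  # prints True
```

Each individual prefers to stay in a human-driven car at every adoption level, yet everyone would be safer if everyone switched: the standard structure of a collective-action problem.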
In sum, it seems to me that the optimal (and socially moral) configuration of self-driving cars has nothing to do with morality at the individual level, but with adoption incentives. If we are mostly interested in our own lives first, which seems reasonable, then self-driving cars should first protect their passengers and only then try to minimize overall deaths. This configuration would almost certainly enhance their adoption, in turn making roads safer for everyone and leading to economic improvements.
Would this happen? Probably not, because this is not how people and policy makers think. Self-driving cars will soon become available, and as soon as the first deaths happen, governments will step in. I cannot make a forecast, but one scenario might be the following. Cars will probably be given some kind of moral-like decision making. Some passengers will die. There will be a lot of debate. The adoption of self-driving cars will not be as widespread as it could be. But self-driving cars will still be safer than human-driven cars, so governments and/or insurance companies will try to incentivize their adoption by increasing the cost of using human-driven cars (through taxes, subsidies, etc.). In the end self-driving cars will become the norm and people will become accustomed to accepting a small risk of certain death. All in all, self-driving cars will eventually be adopted anyway, but the process will be slower than it could be.
At this point the question is: since in the long run we will use self-driving cars anyway, is it socially better for them to be moral or not? Again, the answer is not easily given. I would probably say that the optimal configuration is the one that minimizes deaths, but this is not always in line with our morality; and if self-driving cars are very efficient, the difference might be negligible as long as their adoption is widespread.

UPDATE 03/10/2016
I found an MIT website that lets people judge several morally difficult situations. After you complete the test, your results are compared with those of other people who took it.
The above reasoning fully applies.

Here's the link:
