The ethics behind autonomous vehicles

In a future that is, for now, somewhat distant, we will get into our car and it will automatically drive us home or to the office. On the way, we can do anything we like: read a book, watch a movie, or take a nap. The car will handle everything itself. Now imagine the vehicle (or even you) is faced with an imminent accident and must make a decision. In front of the car are two motorcyclists, one wearing a helmet and the other not. The question that arises is: which one should it hit?

The helmet problem 

The above dilemma was first posed by Noah Goodall (2014). It pits two moral principles against each other. On one hand, harm minimisation: hit the motorcyclist with the helmet, who is in theory more likely to survive the impact. On the other hand, responsibility: hit the one without a helmet, since it was his obligation to wear one.

As you can see, we are still a long way from this future, but the ethical issues are already coming to the fore. Justifiably so, since self-driving vehicles will be programmed in advance to perform a specific action when such a situation arises. It is therefore necessary to decide how they should act.
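To see why this is a programming problem and not only a philosophical one, here is a minimal sketch in Python of how the two principles from Goodall's dilemma could be written as competing decision rules. Every name and probability below is invented for illustration; no real vehicle works this way.

```python
# Hypothetical sketch: Goodall's helmet dilemma as two competing
# decision rules. All classes, names, and numbers are assumptions.
from dataclasses import dataclass

@dataclass
class Motorcyclist:
    name: str
    wearing_helmet: bool
    survival_probability: float  # assumed chance of surviving an impact

def harm_minimising_choice(a, b):
    """Hit whoever is more likely to survive the impact."""
    return a if a.survival_probability > b.survival_probability else b

def responsibility_based_choice(a, b):
    """Hit whoever failed to meet the legal obligation to wear a helmet."""
    return a if not a.wearing_helmet else b

helmeted = Motorcyclist("helmeted rider", wearing_helmet=True,
                        survival_probability=0.6)
bare = Motorcyclist("bare-headed rider", wearing_helmet=False,
                    survival_probability=0.2)

# The two rules disagree, which is exactly the dilemma:
print(harm_minimising_choice(helmeted, bare).name)       # helmeted rider
print(responsibility_based_choice(helmeted, bare).name)  # bare-headed rider
```

Both functions are trivially simple; the hard part is that someone has to pick which one ships in the car.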

Are we ready for this type of mobility?

A study conducted by the American Automobile Association (2016) revealed that 3 out of 4 Americans are afraid of travelling in a self-driving vehicle. In addition, there is little or no legislation in place. In the event of an imminent crash, who should the vehicle hit? The drunken beggar rather than the doctor? The athlete rather than the person with obesity?

As can be seen, numerous questions emerge as to whether someone should be chosen depending on their physical appearance, economic or social status, age, and so on. Faced with this dilemma, the German government published a code of ethics in 2017 that regulates these kinds of actions: it prohibits discriminating between people based on characteristics such as physical appearance or economic position.

However, an experiment published in Nature (the Moral Machine experiment) showed that humans tend to favor some people over others. The study is based on the results of an online game that anyone can access. The results show that people prefer to save children over the elderly, athletes over people with obesity, and a doctor over a beggar. So the question remains: should we make a choice and program the vehicle to execute it, or should we follow the example of the German code?
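If designers did decide to encode such preferences, even the three pairwise results mentioned above would have to be spelled out as an explicit ranking of lives. A deliberately crude sketch, using only the pairs reported here (everything else is an assumption):

```python
# Sketch: the three pairwise preferences reported by the study, written
# out as data. Pairs not listed are deliberately left undefined, which
# shows how incomplete a hand-coded ranking of lives would be.
PREFER_TO_SPARE = {
    ("child", "elderly person"): "child",
    ("athlete", "person with obesity"): "athlete",
    ("doctor", "beggar"): "doctor",
}

def spare(a, b):
    """Return whoever the encoded preferences say to spare, if known."""
    return PREFER_TO_SPARE.get((a, b)) or PREFER_TO_SPARE.get((b, a))

print(spare("elderly person", "child"))  # child
print(spare("doctor", "athlete"))        # None: no encoded preference
```

The gaps are the point: any hand-written table of who to spare will leave most pairings undefined, which is precisely what the German code refuses to allow in the first place.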

The owner involved

These ethical dilemmas become even more complex when the car owner is involved. The following scenario is presented by Yuval Noah Harari in his book Homo Deus. Imagine the car is faced with an imminent collision: a child is crossing the road and has not noticed the oncoming vehicle. There are only two ways out: hit the small pedestrian and save the owner's life, or swerve sharply to save the child even if it compromises the owner's own life. What should the algorithm do in this case? As with the previous questions, there is no single answer.

Now imagine that the manufacturers of these cars decide to delegate this responsibility to their customers, so they create two models. One privileges the life of the owner and the occupants of the vehicle and will do everything possible to save them. The other prioritises preserving the greatest number of lives, even if it means sacrificing the occupants. Which model would you buy?
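To make the difference between the two hypothetical models concrete, here is a sketch in which both are the same algorithm with a single policy switch. The maneuvers, casualty counts, and policy names are invented for illustration:

```python
# Hypothetical sketch of the two product models as one algorithm with a
# single policy switch. All maneuvers and casualty counts are assumed.

def choose_maneuver(maneuvers, policy):
    """Pick the maneuver a given policy prefers.

    maneuvers: list of (name, occupant_deaths, others_deaths) tuples.
    policy: "protect_owner" or "minimise_total_harm".
    """
    if policy == "protect_owner":
        # Egoist model: occupants first; harm to others only breaks ties.
        key = lambda m: (m[1], m[2])
    else:
        # Altruist model: fewest total deaths; occupants break ties.
        key = lambda m: (m[1] + m[2], m[1])
    return min(maneuvers, key=key)[0]

options = [
    ("brake straight, hit the pedestrians", 0, 2),
    ("swerve into the barrier",             1, 0),
]

print(choose_maneuver(options, "protect_owner"))        # brake straight...
print(choose_maneuver(options, "minimise_total_harm"))  # swerve into the barrier
```

The unsettling part is how small the difference is: a one-line change in the ranking function decides who lives, which is exactly why letting customers pick the policy is so contentious.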

As you can see, when it comes to the ethics of autonomous vehicles, there are more questions than answers. We have not yet reached the point where such choices must be programmed, as there are no Level 5 vehicles capable of fully autonomous driving so far. Cars like Teslas, whose driver-assistance features correspond to Level 2 of the SAE scale, require the presence and continuous supervision of a human. However, in the event of an accident, it is not yet clear who would be at fault: the machine or the human.