Artificial intelligence is one of the most fascinating technologies gradually finding its way into all areas of our society. One of its most prominent expressions is the algorithm.

Algorithms are defined as a set of “mathematical instructions or rules that, especially when given to a computer, will help to calculate an answer to a problem” (Cambridge Dictionary, 2020). Thus, algorithms are able to solve problems but also to make decisions without human intervention. They reach conclusions by learning on their own, a process known as machine learning.

For a few years now, algorithms have been used in different areas of our society, including sensitive domains such as justice, policing and health. Their place in our society is no longer in doubt, thanks to the non-negligible advantages they offer. The potential of algorithms can be summed up in three words: cheaper, faster, and better.

Nevertheless, some recent events have cast doubt on the fairness of automated decision-making and exposed certain biases within it. In 2018, Amazon abandoned a hiring algorithm after it turned out to systematically favour male candidates, blatantly discriminating against women. In 2016, the COMPAS (Correctional Offender Management Profiling for Alternative Sanctions) algorithm, used in the justice system to predict recidivism, was criticised as being biased against African-American people. Within the literature, many comments and criticisms have emerged with regard to algorithms and the unfair decisions they sometimes reach. Facing these issues, some have promoted the view that humans are better at making decisions than algorithms.

But most unfair decisions, whether made by humans or by algorithms, share a common denominator at the root of the unfairness: the biases that drive the decision. Whoever can overcome these biases will make the fairest decisions. It is on this point that algorithms can make the difference and be fairer than humans in decision-making. Unlike humans, who cannot easily detach themselves from their biases, algorithms might be able to avoid bias provided their parameters are correctly adjusted.

[Image: automation, robot and human. Source: Pixabay, 2022, https://pixabay.com/fr/vectors/automation-robot-humain-6762812/]

 

Human biases cannot be changed, unlike the biases of algorithms.

As Daniel Kahneman showed, individuals do not always make rational decisions, and this for two reasons. First of all, humans build their social egos by integrating norms and values from their environment. These elements build their personality and influence not only the way they think but also the way they make decisions. In decision-making, humans think according to the environment in which they were shaped. A large body of psychological research has shown that this environment influences them and is full of biases. The real problem is that humans are unable to detach themselves from their environment, so when they make a decision, the biases present in their environment are integrated into that decision.

Moreover, Daniel Kahneman emphasised that the mind is divided into two systems: System 1, linked to emotional thinking, and System 2, linked to rational and logical thinking. The second system is the one that leads to reasoned decisions. While the first system has evolved quite well, the second is more recent and less developed. This is why decisions taken by humans are often flawed and therefore “illogical, inconsistent and suboptimal”. Humans use irrelevant information, can be influenced by various factors, and cannot be trained out of this. It is notably in this respect that algorithms can be better than humans: when biases are detected, the parameters of an algorithm can be modified and adapted to make its decisions more accurate, whereas it is difficult for humans to erase their biases.

 

Better use of data in algorithms can make automated decision-making better than that of humans.

Algorithmic decisions can be fairer than those made by humans if certain parameters are changed within their functioning. One of the first parameters affecting the fairness of decisions is the use of data. Data is the core of what makes an algorithm work: algorithms learn and make decisions from data. Thus, if the data is fair, there is a high likelihood that the decision will be fair too, whereas if the data is biased, there is a high likelihood that the decision will be unfair. An unfair decision due to data arises in two circumstances: when the data is influenced by the biases present in the real world, which we call the “unequal ground truth”, or when there is a lack of data, often due to an under-representation of minorities, which we call “dirty data”. Hence the need to overcome these imperfections by using data that is as fair as possible and by finding mechanisms that keep bias out of the data. To achieve a fair use of data, different solutions have been developed that give confidence in the ability of algorithms to reach fair decisions.
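Before turning to those solutions, here is a minimal sketch, in Python with hypothetical column names, of how the under-representation behind “dirty data” could be spotted in a training set before any model is trained:

```python
import pandas as pd

def underrepresented_groups(df: pd.DataFrame, column: str, threshold: float = 0.10):
    """Return the groups whose share of the dataset falls below `threshold`."""
    shares = df[column].value_counts(normalize=True)
    return shares[shares < threshold]

# Toy example: a hiring dataset in which one group is barely present.
applicants = pd.DataFrame({"gender": ["M"] * 95 + ["F"] * 5})
print(underrepresented_groups(applicants, "gender"))  # flags "F" at 0.05
```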

One of these solutions is Leskovec’s idea for getting away from the unequal ground truth. He proposed training algorithms on carefully selected data consisting of past decisions that we consider to be fair. For example, in automated decision-making in the justice system, if an algorithm learns from a judge’s decisions considered to be fair, the algorithm will probably reproduce the same reasoning and therefore reach a fair decision.
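A minimal sketch of this idea, assuming hypothetical case data with a vetting flag added by human reviewers, could look like this:

```python
import pandas as pd
from sklearn.linear_model import LogisticRegression

# Hypothetical past rulings; "vetted_fair" marks the decisions that
# human reviewers have judged to be fair.
cases = pd.DataFrame({
    "prior_offences": [0, 3, 1, 5, 0, 2],
    "age":            [25, 40, 30, 22, 35, 28],
    "decision":       [0, 1, 0, 1, 0, 1],      # past ruling (1 = detain)
    "vetted_fair":    [True, True, True, False, True, False],
})

curated = cases[cases["vetted_fair"]]           # keep only vetted rulings
model = LogisticRegression().fit(
    curated[["prior_offences", "age"]], curated["decision"]
)
```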

Another way to avoid bias is to set restrictions on the data given to the algorithm. Some researchers have done so by excluding certain categories of data, such as race or gender, from automated decision-making. This setting avoided unfair and discriminatory decisions based on sex and/or race. If Amazon had applied such a parameterisation to its 2018 hiring algorithm, the discrimination against women might never have happened.
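In code, such a restriction could be as simple as the following sketch (the column names are hypothetical):

```python
import pandas as pd

PROTECTED = ["gender", "race"]  # categories excluded from the model

def strip_protected(df: pd.DataFrame) -> pd.DataFrame:
    """Drop protected attributes so the model never conditions on them."""
    return df.drop(columns=[c for c in PROTECTED if c in df.columns])

candidates = pd.DataFrame({
    "experience_years": [2, 7, 4],
    "test_score":       [81, 90, 75],
    "gender":           ["F", "M", "F"],
})
features = strip_protected(candidates)  # only experience and score remain
```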

The last idea for avoiding bias is a prior test, set up by the developers, which consists of testing the algorithm before using it on the public. If the algorithm reaches a correct result, it can be used; if it reaches an incorrect result, it must be reviewed.
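A minimal sketch of such a prior test, with hypothetical thresholds for accuracy and for the balance of positive outcomes across groups, might look like this:

```python
def passes_prior_test(y_true, y_pred, groups,
                      min_accuracy=0.80, min_parity=0.80):
    """Gate deployment on held-out accuracy and on how evenly the
    positive decisions are spread across groups."""
    accuracy = sum(t == p for t, p in zip(y_true, y_pred)) / len(y_true)
    by_group = {}
    for g, p in zip(groups, y_pred):
        by_group.setdefault(g, []).append(p)
    rates = {g: sum(v) / len(v) for g, v in by_group.items()}
    parity = min(rates.values()) / max(rates.values())
    return accuracy >= min_accuracy and parity >= min_parity

# If this returns False, the algorithm goes back to the developers
# for review instead of being released to the public.
ok = passes_prior_test(
    y_true=[1, 0, 1, 1, 0, 1],
    y_pred=[1, 0, 1, 0, 0, 1],
    groups=["A", "A", "B", "B", "A", "B"],
)
print(ok)  # False here: group A receives far fewer positive decisions
```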

 

Better transparency can make autonomous algorithmic decisions better than human ones.

To be able to improve a decision, it is necessary to understand the reasoning behind it. It is admittedly difficult to understand how a person came to a decision, because human reasoning is often irrational and cannot easily be explained, and therefore cannot easily be improved. Automated decisions, by contrast, follow rational reasoning based on a set of coded instructions. But to be improved and made fairer, the algorithm needs to be transparent. Why? Because transparency encourages developers to obtain fairer results by verifying and refining decisions, instead of mechanically applying the automated outcome on the assumption that “because the algorithm gives this decision, the decision is fair”.

There are two types of transparency in algorithms: process transparency (transparency about the internal state) and outcome transparency (transparency about the decisions themselves and the patterns in those decisions). The first type is difficult to achieve for complex algorithms, but outcome transparency is possible and allows humans to understand the pattern of the decisions. Because developers can understand the pattern of an algorithm’s decisions, they can see why a decision might be unfair and improve the algorithm afterwards. Another interesting level of transparency was raised by Anupam Chander, who adds that transparency about the inputs and outputs of the algorithm is also relevant. By making inputs and outputs transparent, disparate impacts can be better verified and resolved so as to arrive at fair decisions.
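A minimal sketch of what outcome and input/output transparency could look like in practice, assuming a simple hypothetical audit log, is the following:

```python
from collections import defaultdict

audit_log = []  # one record of (input, group, decision) per case

def decide_and_log(features, group, decision):
    """Record the input and output of every automated decision."""
    audit_log.append({"input": features, "group": group, "decision": decision})
    return decision

def outcome_report():
    """Positive-decision rate per group: the 'pattern' of the outcomes."""
    totals, positives = defaultdict(int), defaultdict(int)
    for entry in audit_log:
        totals[entry["group"]] += 1
        positives[entry["group"]] += entry["decision"]
    return {g: positives[g] / totals[g] for g in totals}

decide_and_log({"score": 88}, "group_a", 1)
decide_and_log({"score": 61}, "group_b", 0)
decide_and_log({"score": 79}, "group_b", 1)
print(outcome_report())  # {'group_a': 1.0, 'group_b': 0.5}
```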

 

So, in the end, who won the battle?

We can affirm that algorithms are capable of making fair decisions, but only if they are used correctly. Indeed, if we simply apply the automated decision without the meticulous work behind it, then algorithms will be as imperfect as humans, and there would be no point in entrusting them with the responsibility of making decisions. But as we have seen, if changes are made to the current approach through improved data and increased transparency, algorithms can most of the time reverse the trend and become fairer than humans in decision-making. Nevertheless, such requirements appear to be the greatest challenge that artificial intelligence needs to overcome. They demand a lot of human and financial investment, but in the long run they might be met. As Victor Hugo said, “Being good is quite easy, what is difficult is being just.”

 

Cailin Van der zijden

Sources:

Kahneman, Daniel, and Amos Tversky. “Prospect Theory: An Analysis of Decision under Risk.” Econometrica, vol. 47, no. 2, 1979, pp. 263–291. JSTOR, www.jstor.org/stable/1914185. Accessed 19 October 2020.

Prairat, Eirick. De la Déontologie. Chapter 2, 2009, pp. 22–33.

Kahneman, Daniel, Paul Slovic, and Amos Tversky. Judgment under Uncertainty: Heuristics and Biases. April 1982, p. 33.

Kannan, Karthik. “How Can We Make Sure That Algorithms Are Fair?” HD Reporter, 2019, http://hdreporter.com/technology/6983-how-can-we-make-sure-that-algorithms-are-fair. Accessed 19 October 2020.

Hacker, Philipp. “Teaching Fairness to Artificial Intelligence: Existing and Novel Strategies against Algorithmic Discrimination under EU Law.” Common Market Law Review, 2018, p. 12.

Abate, Tom, and Marina Krakovsky. “Which Is More Fair: A Human or a Machine?” Stanford Engineering, 2018.

Demiaux, Victor, and Yacine Si Abdallah. “How Can Humans Keep the Upper Hand?” Commission Nationale Informatique et Libertés, 2017, p. 52.

Rainie, Lee, and Janna Anderson. “Code-Dependent: Pros and Cons of the Algorithm Age.” Pew Research Center Internet and Technology, 2017.

Seymour, William. “Detecting Bias: Does an Algorithm Have to Be Transparent in Order to Be Fair?” University of Oxford, 2018, p. 2.

Chander, Anupam. “The Racist Algorithm?” 115 Mich. L. Rev. 1023 (2017), p. 3.