How to respond to biases resulting from the use of AI in recruitment processes?
Image credit: Adobe Stock

The biases produced by artificial intelligence algorithms are one of society’s main concerns. Some of the most serious consequences of these algorithmic biases arise in human resources, particularly during the hiring process. Both designers and institutions are therefore trying to limit these biases, and above all their repercussions, as much as possible.

Biases are not only algorithmic; they are in fact very common, as the human brain naturally takes shortcuts based on the information at its disposal. Not all biases are negative, and they do not always lead to discrimination. Even so, training an entirely unbiased artificial intelligence remains difficult.

A loss of opportunity as a consequence

The consequences of algorithmic bias include social harm, economic loss, and loss of freedom. The loss of opportunity, however, is probably the most serious: it consists of discriminating against a certain category of the population by preventing them from accessing education or employment[1].

Many companies now use AI software in their recruitment processes. These systems reproduce the patterns they have been taught in order to filter and select the most relevant resumes for a given job. Inevitably, the AI also reproduces the biases instilled (albeit unintentionally) by humans during the algorithm’s machine learning phase[2].

Another phenomenon created by the use of AI in recruitment is that of “hidden workers”: job seekers who are automatically rejected by the algorithms because of certain aspects of their profile. The AI systems in use are designed to maximize the efficiency of recruitment processes and minimize the number of candidates for a job[3]. As a result, candidates who lack certain diplomas or experience are not retained by the AI. A Harvard Business School study published in September 2021 estimated that these “hidden workers” number about 27 million in the United States.

What solutions?

In response to this reality, the New York City Council passed a bill on November 11, 2021, to control the biases of artificial intelligence systems used during the hiring process. One of its main provisions requires employers to notify applicants when AI is used in the recruitment process. The information the system uses to assess a candidate’s suitability for a job must also be disclosed. In addition, providers of AI systems will have to audit their systems for bias and report the results to the companies that use them[4].

At the European level, this issue is also addressed by the European Commission’s proposal for a regulation on AI[5]. The proposal establishes a four-level classification based on the use made of an AI system:

  • Unacceptable risk
  • High risk
  • Limited risk
  • Minimal risk

Classified as high risk, the use of AI in recruitment will be subject to stricter requirements. An evaluation of these systems will be required, as well as their registration in a database. Providers will also have to put system monitoring in place, and the authorities will be responsible for overseeing the market.

Although market players show a clear awareness of the risks of AI, these proposals are still only drafts and remain deliberately vague pending their application. It is therefore necessary to follow the future evolution of these bills closely.

By Alexia Nay





[5] Proposal for a regulation of the European Parliament and of the Council laying down harmonised rules on artificial intelligence (artificial intelligence act) and amending certain union legislative acts (COM/2021/206 final).
