The AI Act on recruitment: navigating compliance

Nowadays, AI systems are involved in major recruitment steps such as drafting job descriptions, screening resumes, and even conducting interviews. Some of those stages are considered to have a significant impact on data subjects' rights.

In that context, the European Union introduced a new regulatory framework for the use of Artificial Intelligence (AI) in various sectors, including recruitment. The goal is to ensure AI systems operate ethically, respect human rights, and avoid discrimination.

The Act classifies AI systems according to the risk they pose to data subjects. Recruitment is considered a high-risk domain under the Act, and organizations using AI in this field must adhere to strict guidelines.

| Risk Level | Examples of Systems | Proposed Measures | Associated Penalty |
|---|---|---|---|
| Unacceptable Risk | Social scoring, widespread biometric identification | Total ban | Up to 7% of global turnover or €35M |
| High Risk | Recruitment, credit scoring, workforce management | CE marking, declaration of conformity, continuous monitoring | Up to 3% of global turnover or €15M |
| Low Risk | Chatbots, artistic deepfakes | Information and transparency obligations | Up to 1% of global turnover or €7.5M |
| Minimal Risk | Anti-spam filters, AI in video games | Voluntary application of codes of conduct | Lighter penalties depending on the severity of the violation |
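
As a rough illustration only, an organization could encode this classification in its internal tooling to flag which compliance track a given use case falls under. The sketch below is a hypothetical helper; the tier names, examples, measures, and penalty figures come from the table above, while the function and dictionary names are invented for this example.

```python
# Hypothetical lookup of AI Act risk tiers, based on the table above.
AI_ACT_RISK_TIERS = {
    "unacceptable": {
        "examples": ["social scoring", "widespread biometric identification"],
        "measures": "total ban",
        "max_penalty": "7% of global turnover or EUR 35M",
    },
    "high": {
        "examples": ["recruitment", "credit scoring", "workforce management"],
        "measures": "CE marking, declaration of conformity, continuous monitoring",
        "max_penalty": "3% of global turnover or EUR 15M",
    },
    "low": {
        "examples": ["chatbots", "artistic deepfakes"],
        "measures": "information and transparency obligations",
        "max_penalty": "1% of global turnover or EUR 7.5M",
    },
    "minimal": {
        "examples": ["anti-spam filters", "AI in video games"],
        "measures": "voluntary codes of conduct",
        "max_penalty": "lighter penalties depending on severity",
    },
}

def obligations_for(use_case: str) -> dict:
    """Return the tier whose listed examples mention the given use case."""
    for tier, entry in AI_ACT_RISK_TIERS.items():
        if any(use_case.lower() in example for example in entry["examples"]):
            return {"tier": tier, **entry}
    raise LookupError(f"No tier example matches {use_case!r}; classify manually.")

print(obligations_for("recruitment")["measures"])
```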

 

Key Requirements for Recruitment under the AI Act

  • Human Involvement in Decision-Making

As mentioned above, recruitment procedures usually produce major legal effects on individuals. Given those implications, AI systems deployed in this area must preserve human oversight, which remains a fundamental part of the decision-making process.

  • Transparency and Candidate Rights

Candidates must be informed when an AI system is used during the recruitment process. They should be told how the process works, and they have the right to request human intervention if they believe a decision was based solely on automated processing.

This obligation is not merely theoretical: human intervention must be effective. The designated person must re-evaluate the AI's results; the review is not a simple formality, and validation should never be automatic.
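
To make the idea of an effective, non-automatic review concrete, here is a minimal sketch of a human-in-the-loop step in which the AI's output is only a recommendation and a named reviewer must record an explicit decision with a rationale. This is one possible design, not a prescribed implementation; all class, field, and value names are hypothetical.

```python
from dataclasses import dataclass
from datetime import datetime, timezone

@dataclass
class ScreeningRecommendation:
    candidate_id: str
    ai_score: float      # output of the screening model (0..1), suggestion only
    ai_suggestion: str   # e.g. "advance" or "reject"

@dataclass
class HumanDecision:
    recommendation: ScreeningRecommendation
    reviewer: str        # named person exercising oversight
    decision: str        # final outcome, set by the reviewer
    rationale: str       # why the reviewer agreed or disagreed
    decided_at: datetime

def record_decision(rec: ScreeningRecommendation, reviewer: str,
                    decision: str, rationale: str) -> HumanDecision:
    # Refuse rubber-stamp validations: a written rationale is mandatory so the
    # review is an actual re-evaluation, not an automatic confirmation.
    if not rationale.strip():
        raise ValueError("Human review requires a written rationale.")
    return HumanDecision(rec, reviewer, decision, rationale,
                         datetime.now(timezone.utc))

rec = ScreeningRecommendation("cand-042", ai_score=0.31, ai_suggestion="reject")
final = record_decision(rec, reviewer="jane.doe", decision="advance",
                        rationale="Relevant experience not captured by the parser.")
print(final.decision)
```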

Additionally, in line with Annex XII (referred to in Article 53), AI model providers must supply essential transparency information to downstream users, such as the model's tasks, acceptable use policies, distribution methods, and interaction with other systems. This includes details on the model's architecture, input/output formats, and the data used for training, enabling users to integrate these models responsibly into their own systems.
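
This kind of information can be handed over as structured documentation alongside the model. The sketch below shows one hypothetical way a provider might package it for a downstream recruitment tool; the field names and values are illustrative and are not taken from the Act's wording.

```python
# Hypothetical transparency sheet a model provider could pass to downstream
# users: tasks, acceptable use, distribution, architecture, I/O formats,
# and a summary of the training data. All fields are illustrative.
model_transparency_sheet = {
    "model_name": "generic-screening-model",
    "intended_tasks": ["resume-to-job relevance scoring"],
    "acceptable_use_policy": "Not to be used as the sole basis for rejection.",
    "distribution": "API access under a commercial licence",
    "interaction_notes": "Designed to feed a human-reviewed shortlisting step.",
    "architecture": "transformer-based text classifier",
    "input_format": "UTF-8 plain text, max 8,000 characters",
    "output_format": "JSON with a relevance score between 0 and 1",
    "training_data_summary": "Anonymised job descriptions and CVs; sources documented separately.",
}

# A downstream recruiter's system can verify the sheet before integration.
assert "acceptable_use_policy" in model_transparency_sheet
```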

  • Risk Evaluation

Before deploying such a system, organizations must conduct a risk assessment. This involves evaluating the potential biases the system may perpetuate and ensuring it does not replicate historical patterns of discrimination based on gender, age, ethnicity, disability, or sexual orientation.
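
One common, though by no means mandated, way to look for such historical patterns is to compare selection rates across groups in past hiring data, for instance using the "four-fifths" rule of thumb. The sketch below assumes a simple list of past decisions; the data, group labels, and 0.8 threshold are illustrative only.

```python
from collections import defaultdict

# Illustrative historical screening outcomes: (group label, was_shortlisted).
# In a real assessment, group labels would come from a careful, lawful
# protected-attribute analysis, not from the production pipeline itself.
history = [
    ("group_a", True), ("group_a", True), ("group_a", False), ("group_a", True),
    ("group_b", True), ("group_b", False), ("group_b", False), ("group_b", False),
]

totals, selected = defaultdict(int), defaultdict(int)
for group, shortlisted in history:
    totals[group] += 1
    selected[group] += int(shortlisted)

rates = {g: selected[g] / totals[g] for g in totals}
best = max(rates.values())

# Flag groups whose selection rate falls below 80% of the best-performing
# group (the "four-fifths" heuristic, used here only as a first screening check).
for group, rate in rates.items():
    ratio = rate / best
    status = "review" if ratio < 0.8 else "ok"
    print(f"{group}: selection rate {rate:.0%}, impact ratio {ratio:.2f} -> {status}")
```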

  • Data Protection Impact Assessment (DPIA)

Recruitment procedures normally involve sensitive personal data, which places additional responsibilities on the data controller and processor. As such, companies must perform a Data Protection Impact Assessment (DPIA) to document how data is collected, processed, and protected.

  • Declaration of Conformity

High-risk AI systems must have a declaration of conformity, be registered in the EU database, and display the CE marking indicating compliance.

  • Compliance with Annex IV

For high-risk systems, compliance includes adhering to rules related to data governance, robustness, and accuracy.

In this regard, to comply with the AI Act in recruitment, companies must follow a structured approach:

  1. AI System Documentation: Providers of AI recruitment solutions must create detailed technical documentation. This includes specifying how the AI functions, its accuracy, and its cybersecurity measures. Regular audits are recommended to ensure ongoing compliance.
  2. Training and Testing: Recruiters using these systems must follow strict usage guidelines. This involves ensuring that the input data (resumes, application forms, etc.) is diverse and representative to avoid biases. Additionally, AI systems should undergo testing to detect any discriminatory patterns or potential flaws before they are used; a brief illustrative check is sketched below.
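
As a complement to point 2, a team might start with a quick representativeness check on the applicant data fed to the system before testing it for discriminatory patterns. The sketch below is one illustrative way to do that; the group labels and the 10% floor are arbitrary assumptions, not thresholds taken from the Act.

```python
from collections import Counter

# Illustrative applicant records: only the group label matters for this check.
applicants = ["group_a"] * 55 + ["group_b"] * 40 + ["group_c"] * 5

counts = Counter(applicants)
total = sum(counts.values())

# Warn when any group makes up less than an arbitrary 10% of the input data,
# since a screening model tested mostly on one group can hide flaws for others.
MIN_SHARE = 0.10
for group, count in counts.items():
    share = count / total
    flag = "under-represented, broaden the test set" if share < MIN_SHARE else "ok"
    print(f"{group}: {share:.0%} of applicants -> {flag}")
```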

 

The AI Act is a significant step toward creating an ethical and fair landscape for AI deployment in recruitment. While it sets rigorous standards, it also provides opportunities for companies to innovate responsibly, provided they adhere to these new regulations.

 

Sources

https://bigmedia.bpifrance.fr/nos-actualites/ia-act-comment-se-conformer-a-la-nouvelle-loi-europeenne-sur-lia

https://travail-emploi.gouv.fr/actualites/l-actualite-du-ministere/article/le-deploiement-de-l-ia-dans-les-organisations-et-son-utilisation-dans-les

https://www.cio-online.com/actualites/lire-ia-et-recrutement-de-nombreux-risques-juridiques-15801.html#:~:text=Anticiper%20la%20mise%20en%20conformit%C3%A9%20avec%20l’AI%20Act&text=Ce%20r%C3%A8glement%20impactera%20tant%20les,cruciaux%20tels%20que%20l’emploi.

https://www.decideurs-magazine.com/tendances/57603-ia-et-emploi-une-tentative-de-mise-a-jour-avec-l-ia-act.html

https://espaces-numeriques.org/wp-content/uploads/2024/02/Guide-Conformite-AI-Act-Fev-2024-1.pdf

https://www.wavestone.com/fr/insight/lai-act-les-cles-pour-comprendre-et-appliquer-la-loi-europeenne-sur-lintelligence-articificielle/
