AI is an umbrella term for a wide array of models, methods, and techniques used to simulate human intelligence, typically in collecting, processing, and acting on data. It includes machine learning, speech and image recognition, and natural language processing.
Employers, and the recruitment agencies who work for them, often use AI software systems which deploy algorithmic decision-making to help them sift large numbers of applications and CVs, and to assist with other hiring processes such as job advertisement placement and video interviewing. The algorithms used in these systems are usually machine learning algorithms. At a high level, this means that the software analyses data and recognises patterns in it which can be used for predictive analytics or modelling. In doing so, the machine learning algorithm learns and optimises its performance over time, based on the data fed into it.
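To make that mechanism concrete, the short Python sketch below is an illustration only: the scikit-learn pipeline, the tiny invented CV snippets and the 'hired' labels are all assumptions, and no production screening system is this simple. It shows the pattern-recognition step in miniature, and why the data fed into the model matters so much, a point returned to below.

```python
# Illustrative sketch of a CV-screening model. The CVs and hiring labels
# below are invented; real systems are far more complex than this.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.linear_model import LogisticRegression

# Hypothetical historical CVs and past hiring decisions (1 = hired).
# The labels are deliberately skewed, as historical hiring data often is.
cvs = [
    "software engineer, captain of the men's chess club",
    "software engineer, men's rugby team member",
    "software engineer, founder of a women's coding society",
    "software engineer, women's chess club captain",
]
hired = [1, 1, 0, 0]

# Turn each CV into word counts and fit a simple classifier to the
# historical outcomes; this is the pattern-recognition step.
vectoriser = CountVectorizer()
X = vectoriser.fit_transform(cvs)
model = LogisticRegression().fit(X, hired)

# Inspect the learned weights: terms that co-occur with past hires get
# positive weights, and terms that co-occur with rejections get negative
# ones, regardless of whether they say anything about competence.
for term, weight in zip(vectoriser.get_feature_names_out(), model.coef_[0]):
    print(f"{term:10s} {weight:+.3f}")
```

Run on this invented data, the model assigns a negative weight to the word 'women's' simply because it correlates with past rejections, which is the same failure mode reported in the example that follows.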
Whilst AI-powered systems are typically implemented by human resources (HR) teams to reduce the cost of, and time taken by, the recruitment process, concerns have been raised that long-standing recruitment biases may actually be replicated and reinforced by AI. Perhaps the best-known example involves Amazon[1], the e-commerce specialist, which was reported in 2018 to have abandoned its CV screening tool when it was shown to strongly favour male over female job applicants: the historical data on which it was trained was biased, as Amazon had in the past hired significantly more male than female software engineers.
Several academic studies show that AI bias is widespread. A 2019 study[2] by researchers at Northeastern University and USC, reported in Harvard Business Review, found that broadly targeted Facebook advertisements for supermarket cashier positions were shown to an audience that was 85% women, while adverts for jobs with taxi companies were shown to an audience that was 75% black. And a report[3] from the University of Pennsylvania Carey Law School, published in 2021, found that black professionals received 30% to 50% fewer call-backs when their CVs contained information tied to their racial or ethnic identity.
In the UK, the anti-discrimination framework is principally set out in the Equality Act 2010. That Act protects individuals from discrimination whether it is generated by a human or by an automated decision-making system, and requires employers to make reasonable adjustments to allow disabled candidates to take part fairly in the recruitment process. In addition, the UK General Data Protection Regulation protects data subjects’ ‘fundamental rights and freedoms’ in relation to the processing of their personal data, including the right to non-discrimination, and places an express obligation on data controllers to take measures to prevent ‘discriminatory effects on natural persons’.
The Information Commissioner’s Office (ICO), which has long been concerned that the use of machine learning algorithms can result in discrimination, recently announced that it plans to open an enquiry[4] into automated HR systems used to screen job candidates, including looking at employers’ evaluation techniques and the AI software they use. Announcing the enquiry as part of ICO25 (a three-year plan setting out the ICO’s regulatory approach and priorities), John Edwards, the UK Information Commissioner, highlighted ‘the impact the use of AI in recruitment could be having on neuro-diverse people or ethnic minorities, who weren’t part of the testing for this software’. As well as investigating these concerns, the ICO will issue refreshed guidance aimed at developers of AI systems, focussed on ensuring that algorithms treat people and their information fairly.
[1] ‘Amazon scraps secret AI recruiting tool that showed bias against women’, Reuters
[2] ‘Discrimination through optimization: How Facebook's ad delivery can lead to skewed outcomes’, arXiv:1904.02095
[3] ‘The Elephant in AI’, Women and Public Policy Program, Harvard Kennedy School
[4] ‘UK Information Commissioner sets out focus on empowering people through information’, ICO