Artificial intelligence (AI) is increasingly used in recruitment, but its deployment in the sector has long been controversial, with many claiming it can reinforce stereotypes and exclude good candidates. However, a new study from the London School of Economics (LSE) has found that AI shows less bias in recruitment than human recruiters, and is more efficient on many tasks.
Despite this, the LSE team also found that people are unwilling to use AI and that its involvement in the hiring process often puts prospective job seekers off applying to a company. This may be down to a lack of the human element in decision-making, which allows for subjectivity.
For their study, People versus machines: introducing the HIRE framework, published in Artificial Intelligence Review, lead author Paris Will and colleagues reviewed previous research into the effectiveness of AI in the hiring process. They also explored the ‘fill-rate’ for open positions, and how likely a recommended candidate is to go on to be hired after a human-led process.
Taking a practical approach to AI vs human hiring
Will, the lead corporate research advisor at The Inclusion Initiative, an LSE research centre looking at how to build a truly inclusive workplace, told Tech Monitor she and her team found that AI job hiring is equal to or better than human hiring.
The “key message of our study is that there is a lot of perceptions towards AI and a lot of them are really negative, as well as controversy around whether we should use these systems or not,” she says. “Our argument is that when adopting AI, we should compare human vs AI outcomes without a bias, and if AI is better then it should be adopted. Don’t think theoretically, it should be tested.”
The main current use of AI in recruitment is to sort through CVs. The LSE study found that while AI had limited abilities in predicting employee outcomes after being hired, it was a “substantial improvement over human predictions” and AI-infused hiring resulted in a more diverse workforce than a purely human process.
Critics of automated recruitment say candidates who don't accurately fit into pre-defined categories can be wrongly excluded by AI systems. A study last year from the Harvard Business Review identified millions of "hidden workers" who were having their CVs wrongly discarded by digital systems.
The biggest factor going against greater use of AI is its lack of emotional intelligence, says Roheel Ahmad, managing partner at executive recruiter Forsyth Barnes. "As advanced as AI can be, the human element allows subjectivity for individual cases," he says. "Career choices are very personal, and the opportunities we present to people are life changing. It takes a real understanding of another person and being flexible on an individual’s circumstances to know what’s best for them or where their skills are best utilised."
This seemed to be reflected in the reaction from both candidates and recruiters seen by the LSE team, who found that “people trust AI hiring less than human hiring because they have privacy concerns, they find AI less personable, and they view organisations deploying AI hiring less attractive than those hiring through humans.”
Ahmad adds: "People buy from people. No one wants their fate decided by a machine." Career decisions, he says, do "not always come down to what’s best in terms of their skills. A good recruiter will look well beyond this, at aspects such as salary requirements, commission and bonus structure.
"Instead they'll ask questions such as are they starting a family, and therefore want flexibility in their office days? Has their progression historically been slow, and therefore would they consider a lower base salary if there’s guaranteed fast-track career growth? This is something you learn through getting to know a human being and something AI cannot currently account for."
AI can add value to recruitment and reduce bias
Much of the lack of trust in AI comes from the way it is portrayed by the media, argues Dr Dario Krpan, assistant professor in behavioural science at LSE. "Our analysis, however, shows that even if AI is not perfect, it is fairer and more effective than human recruiters," he says. "Rather than focusing on AI in isolation, it is important to compare it to the alternative hiring practices to understand the value it brings to the recruitment process."
In fact, Dr Grace Lordan, associate professor and founding director of The Inclusion Initiative, says AI can help tackle cronyism and bias. "It is time that humans hand over the hiring process to machines who do not have these tendencies," she says.
Dr Lordan adds that bias in algorithms is easier to mitigate than biases seen in humans, and that compliance specialists can be employed to monitor the AI and allay any concerns about fairness. "Let’s progress AI in recruitment and workplace inclusivity at the same time," she says.
According to a 2019 Gartner study cited by the LSE team, as many as 37% of businesses had made use of AI in the recruitment process, and 34% of recruiters believed AI would be extremely important in shaping the future of hiring practices.
Ahmad agrees with this point, saying: "There will always be room for improvement, as recruitment processes will always need to be aligned to the latest advancements in technology to really enhance the human element – but never replace it."