The UK’s Information Commissioner’s Office (ICO) has published 296 recommendations for AI recruitment software providers after conducting a wide-ranging audit of significant players in the sector. While the ICO said it recognised the benefits of AI-powered platforms for sourcing, screening and selecting new job applicants, it said that shortfalls persisted in their commitment to data and privacy protections.
“AI can bring real benefits to the hiring process, but it also introduces new risks that may cause harm to jobseekers if it is not used lawfully and fairly,” said the ICO’s director of assurance, Ian Hulme. “Organisations considering buying AI tools to help with their recruitment process must ask key data protection questions to providers and seek clear assurances of their compliance with the law.”
AI recruitment tools increasingly popular
The market for AI-powered recruitment tools has flourished in recent years, with businesses enthusiastically embracing platforms capable of screening CVs for the ‘ideal’ candidate, assessing applicants’ skills through behavioural or psychometric assessments, or even evaluating their emotional state during interviews. The ICO’s audit of AI recruitment tools analysed all but the last of these categories. Conducted between August 2023 and May 2024, it assessed privacy management frameworks, levels of data minimisation, third-party relationships, IT security and transparency, among other factors, across providers in the sector.
The data watchdog noted in its subsequent report that there were several encouraging practices among certain AI recruitment platforms: many providers, for example, proactively monitored the accuracy and bias levels of their products and took action to improve them where necessary. However, “features in some tools could lead to discrimination by having a search functionality that allowed recruiters to filter out candidates with certain protected characteristics,” while others inferred ethnicity, gender and other characteristics from candidates’ names without even asking them – data that was “often processed without a lawful basis and without the candidate’s knowledge.”
The ICO was also alarmed to discover that certain tools were collecting what it deemed excessive amounts of personal information, with candidates and even recruiters rarely made aware that this was happening. “In some cases, personal information was scraped and combined with other information from millions of people’s profiles on job networking sites and social media,” said the watchdog. “This was then used to build databases that recruiters could use to market their vacancies to potential candidates.”
Positive rate of engagement with ICO recommendations in market
In its recommendations to AI providers and recruiters, the ICO urged both to process personal information gathered by AI-powered recruitment tools “fairly” by monitoring for issues pertinent to a platform’s accuracy, bias and potential to discriminate. The inner workings of such tools should also be explainable, said the regulator, and should use the minimum amount of personal information necessary to function. AI providers should also be clearly defined as either the controller, joint controller or processor of personal information, and be able to demonstrate the lawful basis on which they process that data.
In total, the ICO sent 296 recommendations and 46 advisory notes to AI providers and recruiters. Both groups appear to have engaged positively with the regulator’s guidelines, with 97% of the recommendations fully accepted and changes made to the use of such tools as a result.
“Our intervention has led to positive changes by the providers of these AI tools to ensure they are respecting people’s information rights,” said Hulme. “Our report signals our expectations for the use of AI in recruitment, and we’re calling on other developers and providers to also action our recommendations as a priority. That’s so they can innovate responsibly while building trust in their tools from both recruiters and jobseekers.”