Amazon shut down an artificial intelligence recruitment tool it was developing in-house after finding that it showed inherent bias against female candidates.

Since 2014, a team had been building an AI tool to review job applications and resumes, with the goal of automating the company's recruitment process.

Speaking to Reuters, one of the people involved in the project commented: “Everyone wanted this holy grail, they literally wanted it to be an engine where I’m going to give you 100 resumes, it will spit out the top five, and we’ll hire those.”

However, after just a year of running the AI recruitment system, Amazon realised that it was rating candidates for technical roles, such as software developer or technician, in a sexist manner.

Amazon AI Recruitment Tool

The computer model was trained using resumes that had been submitted to Amazon over a period of ten years. Most of these applications had been sent in by male candidates, a weighting that reflected the gender split within the tech industry.

The AI recruitment tool erroneously interpreted this data to mean that males were the preferred candidates and that any application with a clear female connection should be downgraded.

This resulted in a situation where female candidates were being penalised for applications that contained wording such as ‘women’s chess club captain’. It also downgraded candidates who had graduated from all-female colleges, according to information disclosed to Reuters.

A spokesperson for Amazon commented: “This was never used by Amazon recruiters to evaluate candidates.” However, the company has not disputed that Amazon recruiters viewed recommendations produced by the model.

This failed attempt at building an AI recruitment tool highlights just how important datasets are when training AI and machine learning models.

An enterprise should make sure that the data it feeds into a model does not carry an inherent bias that the machine will then extrapolate, producing an ineffective AI model, as the sketch below illustrates.

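A minimal sketch of how this can happen, using a hypothetical toy dataset (not Amazon's system or data): a plain bag-of-words classifier trained on historically skewed hiring outcomes attaches negative weight to gender-associated tokens, even though gender is never an explicit input feature.

```python
# Minimal sketch with made-up data, not Amazon's system: a bag-of-words
# classifier trained on skewed historical hiring outcomes learns to treat
# gender-associated tokens as negative signals.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.linear_model import LogisticRegression

# Toy historical data: past resumes and whether the candidate was hired.
# Most positive examples are male-associated, mirroring the article.
resumes = [
    "software engineer java chess club captain",
    "developer python robotics team lead",
    "systems engineer distributed design",
    "software developer women's chess club captain",
    "python engineer women's college graduate",
]
hired = [1, 1, 1, 0, 0]  # the skewed historical outcomes

vectorizer = CountVectorizer()
X = vectorizer.fit_transform(resumes)
model = LogisticRegression().fit(X, hired)

# The token "women" inherits a negative weight purely from the labels,
# even though gender was never passed in as a feature.
for token, weight in sorted(
    zip(vectorizer.get_feature_names_out(), model.coef_[0]),
    key=lambda pair: pair[1],
):
    print(f"{token:12s} {weight:+.3f}")
```
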
Biases, such as selection bias, interaction bias, or similarity bias, can lead to financial or legal difficulties when AI is deployed at a large, professional scale.
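
One common safeguard against such problems, sketched below with hypothetical numbers, is an adverse-impact audit: compare the model's selection rate across candidate groups and flag ratios below the "four-fifths" threshold used in US employment-discrimination analysis.

```python
# Hypothetical audit sketch (made-up numbers): a simple adverse-impact
# check on a model's interview recommendations.
def selection_rate(decisions):
    """Fraction of candidates the model recommends (1 = recommend)."""
    return sum(decisions) / len(decisions)

# Toy model outputs for two candidate groups.
male_decisions = [1, 1, 0, 1, 1, 0, 1, 1]    # selection rate = 0.75
female_decisions = [1, 0, 0, 0, 1, 0, 0, 0]  # selection rate = 0.25

impact_ratio = selection_rate(female_decisions) / selection_rate(male_decisions)
print(f"Impact ratio: {impact_ratio:.2f}")

# Four-fifths rule: a ratio below 0.8 is a common red flag for adverse
# impact, warranting investigation before the model is deployed.
if impact_ratio < 0.8:
    print("Warning: potential adverse impact against female candidates")
```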