Discrimination laws must be adapted to consider the impact artificial intelligence algorithms have on certain groups, new research has found. The paper from the Oxford Internet Institute says that AI systems are exhibiting bias against groups not protected under current legislation, and that governments should consider updating laws to reflect this.
In the study, published today in the journal ‘Tulane Law Review’, author Professor Sandra Wachter of the Oxford Internet Institute argues that something as simple as the web browser you use, how fast you type or whether you sweat during an interview can lead to AI making a negative decision about you.
She says current discrimination laws don’t adequately combat the type of bias exhibited by artificial intelligence, because many of the people who receive unfair outcomes, whether over loan decisions, job applications or funding requests, fall outside the “protected groups” covered by discrimination legislation.
How does AI discriminate against certain groups?
Discrimination linked to AI can happen in even ordinary situations, without the individual knowing an AI made the final call, says Professor Wachter in her paper. Indeed, some of the decisions are made on criteria a human wouldn’t consider relevant.
She uses the example of applying for a loan, where an automated system might reject an applicant who scrolls too quickly through the application pages, on the assumption that they have not read the documentation properly. Professor Wachter says these new forms of discrimination don’t fit neatly into what anti-discrimination legislation has traditionally covered: AI challenges the law’s assumptions by singling out individuals based on criteria it does not protect.
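The kind of proxy-based screening described above can be sketched in a few lines. This is a hypothetical illustration only: the feature names (`scroll_speed_px_per_s`, `browser`) and the thresholds are assumptions for the sake of the example, not taken from any real lending system.

```python
# Hypothetical sketch of proxy-based automated screening. Feature names and
# thresholds are illustrative assumptions, not from any real system.

def screen_application(features: dict) -> str:
    """Flag applicants on behavioural proxies rather than protected traits."""
    # An applicant who scrolls very quickly is assumed not to have read
    # the terms properly.
    if features.get("scroll_speed_px_per_s", 0) > 3000:
        return "rejected: documentation likely unread"
    # Browser choice is a non-protected attribute that can still drive outcomes.
    if features.get("browser") == "Safari":
        return "flagged for manual review"
    return "passed automated screening"

print(screen_application({"scroll_speed_px_per_s": 4200, "browser": "Chrome"}))
```

A human reviewer would not treat scroll speed or browser choice as relevant to creditworthiness, which is precisely the gap the paper argues current law does not address.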
Race, gender and sexual orientation could be replaced by groups such as dog owners, video gamers and even Safari browser users in the context of AI decision-making, she explains, particularly when it makes decisions over hiring, loans and insurance.
“Increasingly decisions being made by AI programmes can prevent equal and fair access to basic goods and services such as education, healthcare, housing, or employment,” she says. “AI systems are now widely used to profile people and make key decisions that impact their lives. Traditional norms and ideas of defining discrimination in law are no longer fit for purpose in the case of AI and I am calling for changes to bring AI within the scope of the law.”
There is an “urgent need to amend current laws to protect the public from this emergent discrimination through the increased use of AI,” the research warns.
This is something supported by Mary Towers, policy officer and AI specialist for the Trades Union Congress (TUC), who has been exploring the impact of AI on the workplace.
She told Tech Monitor that there are a number of ways to address the issue, and is calling on the government to update existing anti-discrimination legislation. She suggested it could also be achieved through better data protection legislation and guidance for automated decision-making.
One area in which the TUC wants to see urgent action is a guarantee of “universal and comprehensive rights of human review of high-risk automated decision making, which includes all decisions made about human workers before any decision about them is finalised.”
Discrimination laws and 'artificial immutability'
Legislation may have to change, as eliminating bias from AI systems is impossible, says Vijai Shankar, VP of product and growth marketing at AI software vendor Uniphore. Firstly, he says, "information itself requires an observer or interpreter, but this inherently introduces bias. Secondly, machine learning uses datasets often derived from human observation and activity. These datasets include all the biases inherent in those human activities and therefore transfer the bias into the system.”
He said it is important to recognise that bias exists in the real world across all industries; the question is not how to make it go away but how to detect and mitigate it.
“Bias is mitigated by using diverse data for training and evaluation including real people from different racial and ethnic backgrounds, genders, regions of origin, accents, and ages," he adds. "This helps create models that focus on what we are truly trying to model and reduce the impact of undesired model bias."
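One common way to detect the kind of bias Shankar describes is to compare a model’s positive-outcome rate across groups in an evaluation set. The sketch below is an illustration under assumed data; the group labels, records and any disparity threshold a team might apply are hypothetical, not drawn from Uniphore’s products.

```python
from collections import defaultdict

def positive_rates(records):
    """records: list of (group, predicted_label) pairs; returns the
    positive-outcome rate per group."""
    totals, positives = defaultdict(int), defaultdict(int)
    for group, label in records:
        totals[group] += 1
        positives[group] += int(label == 1)
    return {g: positives[g] / totals[g] for g in totals}

def disparity(records):
    """Largest gap in positive-outcome rate between any two groups."""
    rates = positive_rates(records)
    return max(rates.values()) - min(rates.values())

# Hypothetical evaluation data: (group, model decision) pairs.
eval_set = [("group_a", 1), ("group_a", 1), ("group_a", 0),
            ("group_b", 1), ("group_b", 0), ("group_b", 0)]
print(f"disparity: {disparity(eval_set):.2f}")  # gap between group outcome rates
```

Evaluating on data that includes the diverse backgrounds Shankar lists is what makes a check like this meaningful: a disparity measured on an unrepresentative sample says little about real-world impact.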
In the research, Professor Wachter coins the term 'artificial immutability', arguing that AI contributes to discrimination in five ways: vagueness, instability, involuntariness, invisibility and a lack of openness. She says that “reconceptualising the law’s envisioned harms is required to assess whether new algorithmic groups offer a normatively and ethically acceptable basis for important decisions.”
She adds: “To do so, greater emphasis needs to be placed on whether people have control over decision criteria and whether they are able to achieve important goals and steer their path in life.”