BCS, the UK’s Chartered Institute for IT, has raised concerns about the consequences of removing the “human in the loop” from decisions made by artificial intelligence systems. As the UK government considers changes to data laws post-Brexit, BCS said today that human review of AI decisions needs legal protection, and that wider regulation of AI should be considered to safeguard individual rights while the technology is still maturing.

Article 22 changes
BCS, Britain’s chartered institute for IT, says human review of AI decisions in areas such as credit scoring must be maintained in any changes to British data laws. (Photo by Dean Mitchell/iStock)

The comments from BCS are a response to a consultation launched in September by the UK government looking at ways UK data laws could be reformed. Under the title ‘Data: A new direction’, the document explores how current legislation could be modified to allow more innovation, remove barriers to data flows and reduce the burden on businesses. This includes the possibility of scrapping Article 22 of the GDPR, the clause that gives individuals the right to human review of decisions taken solely by automated software programs.

Getting rid of Article 22 would remove the “human-in-the-loop” provision for algorithmic decision-making from current data laws, leaving individuals with no recourse to appeal against decisions taken solely by AI or other automated tools.

What is Article 22?

Although the principle of the “human in the loop” predates GDPR, having existed in European data protection legislation since 1995, Article 22 of the GDPR provides additional rules protecting individuals who are legally affected by decisions, including profiling, made solely through automated means such as AI.

For example, an individual whose credit card application is rejected by an AI system has the right under Article 22 to appeal and ask for a human to review that decision.
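In system terms, this amounts to routing contested automated decisions to a human reviewer. The Python sketch below illustrates the pattern only; the scoring rule and the names it uses (score_applicant, ReviewQueue) are invented for illustration and are not drawn from any real credit-scoring system.

```python
# Minimal sketch of the "human-in-the-loop" pattern Article 22 protects.
# All names here (score_applicant, ReviewQueue) are hypothetical
# illustrations, not a real credit-scoring API.
from dataclasses import dataclass, field

@dataclass
class Decision:
    applicant_id: str
    approved: bool
    solely_automated: bool = True  # Article 22 targets solely automated decisions

@dataclass
class ReviewQueue:
    """Contested decisions waiting for a human reviewer."""
    pending: list = field(default_factory=list)

    def appeal(self, decision: Decision) -> None:
        # The individual asks for a human to reconsider an automated
        # decision with legal or similarly significant effects.
        self.pending.append(decision)

def score_applicant(applicant_id: str, income: float, debts: float) -> Decision:
    """Toy automated credit decision: approve if the debt ratio is low."""
    return Decision(applicant_id, approved=debts / max(income, 1.0) < 0.4)

queue = ReviewQueue()
decision = score_applicant("A-123", income=30_000, debts=18_000)
if not decision.approved:
    queue.appeal(decision)  # the applicant exercises the right to human review
```

Removing Article 22 would take away the legal guarantee that the appeal route in such a design must exist.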

But as Tech Monitor has recently reported, Article 22 is rarely applied by judges or data protection authorities. The first such application came earlier this year, in a court case involving ride-hailing service Ola, when it was ruled that a solely automated decision had been used to make deductions from a driver’s earnings.

Speaking at the time, Frederik Zuiderveen Borgesius, professor of law at Radboud University Nijmegen, said that Article 22 is rarely applied because many AI-driven decisions about people fall outside the scope of the provision: “Article 22 only applies to narrow categories of decisions about people, namely decisions with ‘legal’ or similarly far-reaching effects. Hence, it’s difficult to see how abolishing that provision would seriously increase innovation.”

Dr Sam De Silva, chair of BCS’ Law Specialist Group and a partner at law firm CMS, said that Article 22 is not an easy provision to interpret and warned about the dangers of interpreting it in isolation.

“We still do need clarity on the rights someone has in the scenario where there is fully automated decision-making which could have a significant impact on that individual,” De Silva said. “We would also welcome clarity on whether Article 22(1) should be interpreted as a blanket prohibition of all automated data processing that fits the criteria or a more limited right to challenge a decision resulting from such processing.

“As the professional body for IT, BCS is not convinced that either retaining Article 22 in its current form or removing it achieves such clarity.”

De Silva also stressed the need to keep the “human in the loop” for fully automated decisions that fall outside the scope of personal data, as some of these could still have a life-changing impact. He gave the hypothetical example of an algorithm created to decide whether someone should get a vaccine, using non-identifiable data such as date of birth or ethnicity.

“Based on the input, the decision could be that you’re not eligible for a vaccine,” said De Silva. “But any protections in the GDPR would not apply as there is no personal data.

“So, if we think the protection is important enough it should not go into the GDPR. It begs the question – do we need to regulate AI generally – and not through the ‘back door’ via GDPR?”
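To make De Silva’s hypothetical concrete, the sketch below shows an eligibility rule driven only by attributes that identify no one; the rule, threshold and group labels are invented for illustration. The point is that such a decision can be life-affecting while processing no personal data in the GDPR sense, so Article 22 would offer no protection.

```python
# Hedged sketch of De Silva's hypothetical: an eligibility decision driven
# only by non-identifiable attributes, so GDPR's personal-data protections
# are never triggered. The rule, threshold and group labels are invented.
from datetime import date

def vaccine_eligible(birth_year: int, ethnicity: str) -> bool:
    """Toy rule: prioritise older age bands and, hypothetically, groups
    deemed higher risk. No name, address or other identifier is used."""
    age = date.today().year - birth_year
    higher_risk_group = ethnicity in {"group_a", "group_b"}  # placeholder labels
    return age >= 50 or higher_risk_group

# A potentially life-affecting outcome, with no personal data processed:
print(vaccine_eligible(birth_year=1990, ethnicity="group_c"))  # False
```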

Should CIOs care about changes to Article 22?

De Silva said CIOs can view changes to Article 22 from two different perspectives, depending on whether they work at companies that supply or develop AI solutions or at end-user organisations that procure such technology. For the latter, retaining the legislation in some form would be preferable, as it means the interests of those organisations’ customers are protected, he said.

“Having said that, from an end-user perspective, it will probably make the CIO’s job a bit more difficult, in the sense that they will need to ensure, from a business and operational perspective, that the protection can be implemented when they are procuring and deploying AI solutions within their business, and this may require more transparency from AI solution providers about how the algorithms make the decisions,” De Silva told Tech Monitor.

However, De Silva added, if Article 22 is removed or altered significantly, decisions about keeping the human in the loop must be taken jointly by business and legal teams. “To a certain extent it depends on whether the CEO believes that this is an important protection for customers of the organisation to actually have,” De Silva said.