UK wants to remove the human from the loop in algorithmic decision making

The UK's post-Brexit data plans include potentially reforming rules around AI decision making. But experts say the costs could outweigh the benefits.

The UK is considering whether to dilute or nullify the “human-in-the-loop” provision for algorithmic decision making in current data laws. This could mean high-stakes decisions about individuals being taken solely by automated software, with no recourse to appeal. Proponents of the plan believe it would boost innovation in AI, but experts say the costs could outweigh the benefits.

Culture secretary Oliver Dowden wants to shake up UK data rules. (Photo by Chris J Ratcliffe/Getty Images)

The UK government is planning an overhaul of the UK’s data regime post-Brexit with the stated purpose of creating “a world-leading data policy” that delivers a “Brexit dividend” following the country’s break from the EU. 

A 146-page consultation document entitled “Data: A new direction” outlines some of the ways the UK’s data laws could be reformed. It focuses heavily on removing barriers to innovation and data flows, and on reducing burdens on businesses. As part of the drive to boost innovation in AI, it mulls whether to scrap Article 22 of the UK GDPR.

This stipulates that people have the right not to be subject to a solely automated decision-making process with “significant” or legal effects, and that they can request a human review of an algorithmic decision. Examples of such algorithmic decisions include a gig worker fired by an algorithm, or a bank using a fully automated decision to grant or deny a mortgage.


The government consultation document acknowledges that there may be “a legitimate need for certain ‘high risk’ AI-derived decisions to require a human review”. However, it asserts there is currently a “lack of certainty on how and when current safeguards are intended to apply in practice”. 

It says that the current legislation around AI creates “a complex exercise for an organisation looking to develop or deploy AI tools, possibly impeding their uptake” and solicits evidence on whether Article 22 is currently effective or could be reformed.

UK algorithmic decision-making plan: a significant change

The principle of “human in the loop” has existed in European data protection law since the 1995 data protection directive, so precedes GDPR by more than two decades. As such, “it’s reasonable for the UK government to question and reassess the logic underlying that principle,” says Omer Tene, vice president and chief knowledge officer at the International Association of Privacy Professionals. 

“In an environment of ubiquitous AI and automated decision making, inserting a human into the loop is not necessarily the best or only safeguard for individual rights,” says Tene. “Realistically, when presented with an automated scoring, for example in a credit or insurance context, human reviewers are highly unlikely to substitute their discretion for that of a machine, as they could face potential liability or discipline.”


Neil Ross, head of policy at tech industry body techUK, says that the current consultation “offer[s] a chance to collect a wide range of feedback to update how [Article 22] is applied as AI technology develops, giving more certainty to businesses as well as consumers”. 

Feedback on the consultation will help determine what will happen to Article 22. At one extreme, Brexit-supporting MPs like Iain Duncan Smith would like to remove the provision entirely. A report from the Duncan Smith-led Taskforce on Innovation, Growth and Regulatory Reform, set up to explore the potential benefits of looser regulation following Brexit, claims that Article 22 makes it “burdensome, costly and impractical for organisations to use AI to automate routine processes”.

The report argues that Article 22 should be scrapped and that “a focus should be placed on whether automated profiling meets a legitimate or public interest test”. If this is deemed “too radical”, “GDPR should at a minimum be reformed to permit automated decision making and remove human review of algorithmic decisions”. 

“This is not something the industry has been calling for; however, we will examine all the proposals in the consultation closely,” says Ross.

Is removing Article 22 worthwhile?

Article 22 has rarely been applied by European courts or data protection authorities. The first such ruling came earlier this year, in a court case involving ride-hailing service Ola, when it was found that a solely automated decision had been used to make deductions from drivers’ earnings.

“Article 22 is rarely applied because many AI-driven decisions about people fall outside the scope of the provision,” says Frederik Zuiderveen Borgesius, professor of law at Radboud University Nijmegen. “Article 22 only applies to narrow categories of decisions about people, namely decisions with ‘legal’ or similarly far-reaching effects.

“Hence, it’s difficult to see how abolishing that provision would seriously increase innovation.” It could theoretically reduce companies’ compliance costs, he says, but there’s no guarantee that this would increase innovation.

On the other hand, the removal of Article 22 could reduce the protection of individuals, says Borgesius. “Article 22 ensures that the consumer, if a loan application is rejected by the computer, can ask that a human at the bank reconsiders the decision,” he says. “If article 22 were abolished, people would not have the right anymore to ask for human intervention.”

Any benefit in reducing a UK company’s compliance burden would also have to be weighed against the potential for endangering the UK-EU data agreement. If the UK’s proposal for a new data regime is not “essentially equivalent” to the EU’s data protections, the current data adequacy agreement could be invalidated.

Laurie Clarke

Senior reporter

Laurie is a senior reporter at Tech Monitor.