March 7, 2024

Unregulated AI could cause the next Horizon scandal

Without robust regulation, it will be even easier to make automated decisions without people’s knowledge or consent.

By Francine Bennett

Many politicians are understandably hopeful about how the recent wave of artificial intelligence (AI) could boost productivity and improve public services. The UK government has set up a new unit for AI innovation in the public sector. Labour is promising to use AI for everything from fraud prevention to reducing truancy.

Data and AI are increasingly embedded in our economy, public services and communities. Algorithms are already being used to make life-altering decisions about our employment, finances and exam results. The adoption of easily accessible general-purpose AI tools like ChatGPT is accelerating the shift to a more automated, data-driven world.

[Image: a Horizon-era Post Office post box. Caption: The Post Office scandal underscores the dangers of integrating AI at speed. (Photo by Shutterstock)]

But leaving important decisions to technology can go badly wrong, as the Post Office scandal showed. Hundreds of postmasters were prosecuted for theft and fraud on the evidence of flawed accounting software, Horizon, with severe consequences for them and their families.

This scandal should underscore the dangers of integrating AI into our economy at pace and uncritically. Where decisions such as hire-and-fire processes and loan applications are delegated to automated systems, they become less transparent and harder to explain. Systematic bias, technical failings or individual circumstances that don’t fit into the system’s map of reality can result in unfair outcomes. And without meaningful routes of redress people can’t appeal decisions.

If we delegate more decisions to AI without first putting in place the right safeguards, more lives could be affected by the ineffective governance of opaque and fallible technical systems. In the case of Horizon, the government is now taking welcome steps to address this miscarriage of justice. At the same time, however, it risks opening a legislative door to further unaccountable and unfair uses of technology.

On 6 February, the government confirmed it will not provide any immediate commitment to new law on AI, instead making further legislative interventions dependent on industry behaviour and further consultation. This falls short of what is needed. We shouldn’t be waiting for companies to stop cooperating or for a Post Office-style scandal to prompt the government and regulators to react.

Meanwhile, British reforms to data protection law in the Data Protection and Digital Information Bill, currently before the House of Lords, will weaken existing protections against automated decision-making. Since the Horizon scandal, the introduction of the General Data Protection Regulation (GDPR) has provided some legal protection against many types of automated decisions, such as loan approvals or hiring decisions. While limited and imperfect, these safeguards introduced important opportunities for mitigating possible harms, as well as a paper trail to support future investigations. The threat of regulator fines and legal action gives organisations an incentive to take complaints seriously and act on them.


Independent legal analysis commissioned by the Ada Lovelace Institute has found that, under the UK's current proposals for AI regulation and data protection, these incentives will be eroded. It will become even easier to make automated decisions about people without their knowledge and without seeking their consent. This could make it simpler for organisations to dismiss the concerns of whistleblowers like Alan Bates, who spearheaded the campaign for justice over the Post Office scandal.

The Horizon failure was about more than just inadequately tested software; it was fundamentally about bad governance. It demonstrated something that should be obvious: technology does not exist in a vacuum. It co-exists with people, and is embedded within institutions and power structures. Legally binding regulation is an important tool for reshaping these structures. It can strengthen the hand of individuals and communities in the face of power that is all too often unaccountable. Getting AI governance right means making those developing and deploying technology genuinely accountable for their impact on people and society.

A survey of the UK public carried out last year by the Ada Lovelace Institute found significant concern that an over-reliance on technology will negatively affect people’s agency and autonomy. More than half (59 per cent) of respondents said that they would like clear procedures in place for appealing to a human against an AI decision – but the only way of guaranteeing this is through legally binding requirements.

We have been shown what can go wrong in a "computer says no" society; without amendment, the government's data protection reforms risk creating an "AI says so" society. Instead, as we enter an age of wider AI use, we should be strengthening rights and protections, ensuring that important decisions are subject to meaningful human review and that personalised explanations are available to the people affected.

To achieve this, the government must look again at the Data Protection and Digital Information Bill. It must work with legal experts, civil society and affected people to pass data protection laws fit for the AI era.

This article originally appeared as part of Spotlight in the New Statesman. 
