January 4, 2021 (updated 1 August 2022)

Voluntary frameworks will not protect against algorithmic bias

Voluntary measures alone will not prevent organisations from using artificial intelligence in discriminatory ways, experts have warned.

By Cristina Lago

The societal risks posed by artificial intelligence (AI), especially the risk that its use will entrench inequality and discrimination, are now widely acknowledged. But it has yet to be determined how society will manage those risks. Businesses, regulators and industry bodies have developed frameworks that define ethical behaviour. But while these frameworks are an essential guide for companies planning their AI implementations, panellists at a recent debate argued that voluntary measures alone will not be enough to contain the danger of AI bias.

‘Naive optimism’ on AI bias

Academics and activists have succeeded in raising public awareness of the risk that automated decision-making systems reflect the biases of the people who create them, intentionally or otherwise. In areas such as policing or financial services, these systems can gravely impact the lives of individuals, and AI bias has the potential to entrench social injustices.

The “naive tech optimism” that assumes new technologies will result in positive social outcomes is no longer acceptable, said Dr Kanta Dihal, researcher on the Global AI Narratives project at the University of Cambridge’s Leverhulme Centre for the Future of Intelligence, speaking at the Digital Ethics Summit 2020, hosted by trade body techUK.

Technology innovation, and the debate about its contribution to society, have historically been led by a narrow group of people, Dihal argued. As a result, the fact that technologies often serve the interests of this exclusive group has escaped scrutiny.

But that is changing, Dihal said. The protests that followed the UK’s decision to base school pupils’ exam results on an algorithm that factored in the previous performance of their school show that automated decision-making is increasingly subject to public oversight.
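The grading controversy illustrates how such bias can arise mechanically. The sketch below is a hypothetical, heavily simplified illustration, not the actual model used: if an algorithm forces each cohort’s grades to match the school’s historical distribution, a high-achieving pupil at a historically low-scoring school is downgraded regardless of individual merit.

```python
# Hypothetical, highly simplified illustration (not the real grading model):
# grades are forced to match the school's historical distribution, so a
# pupil's result is capped by what past cohorts at their school achieved.

def moderate_grades(teacher_grades, school_history):
    """Replace teacher-assessed grades with the school's past distribution.

    teacher_grades: list of (pupil, grade), sorted best-first by teacher ranking.
    school_history: the school's past grades, sorted best-first, same length
    as teacher_grades (an assumption made here for simplicity).
    """
    return [
        (pupil, historical_grade)  # the pupil inherits the school's past result
        for (pupil, _teacher_grade), historical_grade
        in zip(teacher_grades, school_history)
    ]

ranked = [("Asha", "A*"), ("Ben", "A"), ("Cal", "B")]
history = ["B", "C", "D"]  # past cohorts at this school never scored above B
print(moderate_grades(ranked, history))
# [('Asha', 'B'), ('Ben', 'C'), ('Cal', 'D')] -- individual attainment is
# overridden by the school's history, entrenching existing inequality.
```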

Bias in algorithmic decision-making

Student protest in London, UK, in August 2020 after A-level and GCSE grades awarded by teachers were downgraded by a government algorithm during Covid-19. (Photo by Chris J Ratcliffe/Getty Images)

“Protests and walkouts have shown the biases and inequalities in all kinds of technologies,” she said. “These historical perspectives are being acknowledged and it has become undeniable that they matter.”

AI governance beyond voluntary frameworks

In this environment, businesses and government organisations must ensure their use of AI is fair and equitable. Ethical frameworks, by which organisations or industries define and make commitments to ethical behaviours, are important tools to help them achieve that.


“Companies should not be creating AI unless they have a framework and a governance model around the creation of that AI,” said Allyn L Shaw, president and CTO at Recycle Track Systems, at the Digital Ethics Summit. “That governance model means that the people you expect that AI to engage, and their data, have to be represented in that design process.”
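What such a representation check might look like in practice is sketched below. The group labels and population shares are illustrative assumptions, not any company’s actual process; the idea is simply to compare the demographic make-up of the training data against the population the AI is expected to serve, before the system is built.

```python
# A minimal sketch of a pre-training representation audit, under assumed
# field names: flag any group whose share of the training data deviates
# from its share of the served population by more than a set tolerance.

from collections import Counter

def representation_gaps(records, group_key, expected_shares, tolerance=0.05):
    """Return groups under- or over-represented in the training data."""
    counts = Counter(record[group_key] for record in records)
    total = sum(counts.values())
    gaps = {}
    for group, expected in expected_shares.items():
        actual = counts.get(group, 0) / total
        if abs(actual - expected) > tolerance:
            gaps[group] = {"expected": expected, "actual": round(actual, 3)}
    return gaps

# Illustrative data: group B makes up half the served population
# but only a fifth of the training records.
training_data = [{"group": "A"}] * 80 + [{"group": "B"}] * 20
served_population = {"A": 0.5, "B": 0.5}
print(representation_gaps(training_data, "group", served_population))
# {'A': {'expected': 0.5, 'actual': 0.8}, 'B': {'expected': 0.5, 'actual': 0.2}}
```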

Examples of such frameworks include Singapore’s Model AI Governance Framework or the call by American investor Stephen Schwarzman to create a “framework for addressing the impacts and the ethics” of AI.


There is scepticism, however, that these frameworks will be enough to prevent companies from using AI in discriminatory but potentially profitable ways. Without accountability, voluntary frameworks are just a piece of paper, said Renée Cummings, data activist in residence at the School of Data Science, University of Virginia, and community scholar at Columbia University, at the summit.

“Some of the greatest designers of these regulations and frameworks are also some of the greatest perpetrators of unethical behaviour, as we’ve seen in recent times,” said Cummings. “Up until recently, we saw major car companies releasing ethical frameworks about how autonomous vehicles should operate. But then again, it’s because they are perpetrating the same things they’re speaking against.”

Instead, Cummings argued, organisations need to be held legally responsible for their use of AI.

AI legislation in the works

Policymakers are beginning to explore what this might look like. The UK’s Centre for Data Ethics and Innovation (CDEI) recently published a review into bias in algorithmic decision-making in key sectors, namely policing, local government, financial services and recruitment. The review set out recommendations for regulation, including that the UK government should issue guidance on how the country’s Equality Act applies to automated decisions.

Two years earlier, the House of Commons Science and Technology Select Committee report ‘Algorithms in Decision-Making’ made recommendations to ensure accountability and transparency – including the creation of an enforceable “right to explanation” that citizens can use to see how machine-learning programmes reach decisions that affect them.
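What a “right to explanation” could mean in practice is easiest to see with a simple scoring model. The sketch below assumes a hypothetical linear credit-scoring model – the features and weights are invented for illustration – and reports each feature’s contribution to the score, so the affected person can see what drove the decision.

```python
# A minimal sketch of a per-decision explanation, assuming a hypothetical
# linear scoring model. Feature names and weights are illustrative only.

WEIGHTS = {"income": 0.4, "years_at_address": 0.2, "missed_payments": -0.9}
THRESHOLD = 0.0

def explain_decision(applicant):
    """Return the decision plus each feature's contribution to the score."""
    contributions = {f: WEIGHTS[f] * applicant[f] for f in WEIGHTS}
    score = sum(contributions.values())
    decision = "approved" if score >= THRESHOLD else "declined"
    # Sort by absolute impact so the explanation leads with what mattered most.
    ranked = sorted(contributions.items(), key=lambda kv: -abs(kv[1]))
    return decision, ranked

decision, reasons = explain_decision(
    {"income": 2.0, "years_at_address": 1.0, "missed_payments": 3.0}
)
print(decision)  # 'declined'
for feature, impact in reasons:
    print(f"{feature}: {impact:+.2f}")
# missed_payments: -2.70  <- the dominant factor behind the decision
# income: +0.80
# years_at_address: +0.20
```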

Similarly, the European Parliament has adopted proposals on the regulation of AI. The proposals included a call for the EU Commission to present “a new legal framework outlining the ethical principles and legal obligations” of organisations using AI. Another called for the creation of a “civil liability framework” that makes companies operating “high risk” AI “strictly liable for any resulting damage”.

The process of legislating against AI bias is likely to be prolonged, not least because of the complexity of its various applications and potential harms. Organisations that use AI should draw guidance from voluntary ethical and governance frameworks for now, but they should expect more binding measures in future.
