August 3, 2021 (updated 4 August 2021, 3:08pm)

Could ‘bias bounty’ competitions tackle AI discrimination?

Twitter is applying the 'bug bounty' model to AI bias. It is a good idea, experts say, but the social network's response must be carefully managed.

By Pete Swabey

Twitter has launched a ‘bias bounty’ competition, offering cash rewards to anyone who can show how one of its algorithms might be harmful. The model has proved successful in the context of cybersecurity, but must be matched with careful communication and a commitment to change, experts told Tech Monitor.


Twitter has shared its image cropping algorithm so researchers can assess the risk of harm. (Photo by Koshiro K/Shutterstock)

Late last week, Twitter revealed that it is working with “hacker-powered” security testing platform HackerOne to run a competition to identify bias in an image-cropping algorithm. The Algorithmic Bias Bounty Challenge offers prizes of up to $3,500 to “demonstrate what potential harms such an algorithm may introduce”.

The algorithm was trained using eye-tracking data to identify the most ‘salient’ part of an image. Last year, users noticed that it was more likely to highlight white people’s faces than those of people of colour. At the time, Twitter said it had tested the system for bias but that “it’s clear that we’ve got more analysis to do”.

In subsequent testing, Twitter found that the algorithm was indeed more likely to favour images of white people over those of black people. As a result, it decided to allow users to crop their images themselves. “Not everything on Twitter is a good candidate for an algorithm, and in this case, how to crop an image is a decision best made by people,” wrote Rumman Chowdhury, director of META, Twitter's AI ethics unit, at the time.
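To illustrate the kind of check involved (this is not Twitter's published methodology), a paired-image test might compare a model's peak saliency scores for photos that differ only in the subject's skin tone and report the average gap. The sketch below assumes a hypothetical predict_saliency function and uses made-up scores purely for demonstration.

```python
# Minimal sketch of a paired-image saliency bias check.
# predict_saliency is a placeholder, NOT Twitter's model; the scores are invented.
from statistics import mean


def predict_saliency(image_id: str) -> float:
    """Placeholder: return the model's peak saliency score for a face in the image."""
    fake_scores = {
        "pair1_white": 0.81, "pair1_black": 0.74,
        "pair2_white": 0.77, "pair2_black": 0.79,
        "pair3_white": 0.85, "pair3_black": 0.70,
    }
    return fake_scores[image_id]


def mean_saliency_gap(pairs: list[tuple[str, str]]) -> float:
    """Average saliency difference across image pairs that differ only in skin tone.

    A gap close to zero suggests the cropper treats the paired images alike;
    a consistently positive or negative gap is a signal worth investigating.
    """
    return mean(predict_saliency(a) - predict_saliency(b) for a, b in pairs)


pairs = [("pair1_white", "pair1_black"),
         ("pair2_white", "pair2_black"),
         ("pair3_white", "pair3_black")]
print(f"Mean saliency gap (white - black): {mean_saliency_gap(pairs):+.3f}")
```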

Now, Twitter has published the code for the image-cropping algorithm for competition participants to assess. It hopes to cultivate a community focused on ML ethics that is comparable to the ethical hackers who participate in bug bounty programmes, the company said. "With this challenge, we aim to set a precedent at Twitter, and in the industry, for proactive and collective identification of algorithmic harms," wrote Chowdhury last week.

AI bias bounty: lessons from cybersecurity

The idea of bias bounty competitions was proposed by AI researchers in a paper last year. "If companies were more open earlier in the development process about possible faults, and if users were able to raise (and be compensated for raising) concerns about AI to institutions, users might report them directly instead of seeking recourse in the court of public opinion," they wrote. Earlier this year, the UK government included bias and safety bounties "where 'hackers' are incentivised to seek out and identify discriminatory elements" among a list of practical steps organisations can take to deliver fair AI-based services.


Bug bounties, in which companies reward researchers for identifying security flaws, have transformed the way the technology sector approaches vulnerabilities, says Andrew Cormack, chief regulatory officer at Jisc, the shared digital services provider for the UK higher education sector. "In the early days, there was no communication [between vendors and security researchers] other than hostile communication." Today, proactively engaging with the security research community is standard practice, says Cormack, who previously ran the incident response team at Janet, Jisc's high-speed network.

A bounty competition is not a replacement for structured testing and analysis, however, and Cormack advises against issuing one until internal and third-party checks - such as AI audits and 'red team' exercises - have been completed. But such competitions allow organisations to subject their systems to challenges they might not have considered on their own, he says.

You've got quite a considerable challenge if [competition participants] are angry and motivated about the bias they think they've uncovered.
Andrew Cormack, Jisc

One lesson from 20 years' worth of bug bounty competitions, Cormack says, is that they "move some of the technical challenges [of testing] into communications challenges". An organisation issuing a bounty needs a process to handle duplicate reports, for example, and challenges they disagree with. They should also be transparent about what they are able to change following the competition and their timescale for doing so. "You've got quite a considerable challenge if [competition participants] are angry and motivated about the bias they think they've uncovered," Cormack says.

Bias bounty competitions: no silver bullet for ethical AI

Carly Kind, director at AI research body the Ada Lovelace Institute, welcomed Twitter's pursuit of innovation in tackling AI bias. "We're excited at the prospect of shifting the priorities of existing engineering/hacker communities and thinking of novel ways to reward those already doing this work," she said. "It's also good to see recognition that an internal team alone is often insufficient to identify all the risks of a system and attempts to find ways to bring other experiences in."

But she also noted that there is no single 'silver bullet' for ethical AI. "Alongside the use of impact assessments, transparency cards, third-party audits, and other accountability mechanisms, bug bounties should be one tool in the toolbox of methods that tech firms use to assess, identify, and mitigate the risks of AI systems throughout the product lifecycle."

While it's "good to see people trying new things," the technical nature of the bug bounty model could exclude the communities who are adversely affected by AI bias, warns David Barnard-Wills, senior research manager at consulting and technology development company Trilateral Research. "If you want to [reach] those communities that are suffering discrimination from artificial intelligence systems, putting it in this sometimes quite exclusionary language – the language of hacking – might not be the best approach."

He adds that the relatively low bounties on offer risk establishing an imbalance in the AI development community. "If you're not paying people the same amount to find a bias as you are to develop the algorithm, then you're going to find a sort of structural imbalance in the industry."

Barnard-Wills observes that current solutions to AI bias are either high-level frameworks that define ethical AI, or technical tools to measure algorithmic bias. What's missing, he says, are tools that help developers put ethical AI principles into practice when creating algorithmic systems. "We've got these high-level principles. How do you turn those into functional design requirements that you can give to a data engineer or a platform architect?"
