October 3, 2018 (updated 28 Jun 2022, 6:23am)

AI Bias “More Dangerous than Killer Robots”

"Well-intended systems may have unintended consequences. As a business leader you will need to think for your company, 'What are the potential financial and legal risks?'"

By Jonathan Chadwick

Bias in AI is more dangerous than killer robots, according to Nvidia deep learning specialist Charlotte Han, and limited data samples could prove profoundly damaging for companies looking to deploy an AI product.

Speaking at IPExpo Europe in London on Wednesday, Han said business leaders need to employ a diverse team to work on AI products in order to avoid biases.

“Killer robots are nothing compared to bias in AI,” she said. “Besides having global data we also need a team of people with diverse backgrounds. If you’re designing a global product, you should include global data.”

See also: IBM Releases “Black Box” Breaker 

Biases – such as selection bias, interaction bias, or similarity bias – can lead to financial or legal difficulties when deploying AI at a large, professional scale, she said.

Han showed a graph of the gender imbalance in AI research across 23 countries, based on 4,000 items of research published at leading conferences.

In nations such as the US, Canada, Japan, China, and France, 85 percent or more of those working on AI were men – making research biases highly probable. This can be especially problematic when designing an AI product specifically for women.

“As a woman, I cannot say I understand men fully and I can design the perfect product for you… It’s the same for men working in AI.”


Even big companies with seemingly unlimited data sets can fall prey to bias, because of the natural biases of the humans who create an AI, Han said.

Microsoft Azure’s facial recognition system, for example, wrongly identified Michelle Obama as a man, because the company did not have enough images of black women in its data set.

“Well-intended systems may have unintended consequences. As a business leader you will need to think for your company, ‘What are the potential financial and legal risks?’

“This will be especially important and obvious in the financial industry and also maybe healthcare.”

Companies will have to manage disruption, embrace change, and rethink privacy and security. Han also said humans need to be part of the decision-making process, describing them as “the goalkeepers of the future in AI”.

“In the past the AI was written by hand, it was hardwired. So if you have AI bias, you can easily spot it and then remove it. However in the future, the AI is going to make the system change behaviour over time. So it will be harder and harder to spot the consequences of bias.”

Humans are still needed in the age of AI to explain an AI system’s decisions to customers, and to “call shots about what’s fair, what’s legal, what’s ethical.

“Therefore it’s very important to have an explainable system so that you can tell your customers why you’re making these decisions.”
