Bias in AI is more dangerous than killer robots, according to Nvidia deep learning specialist Charlotte Han, and limited data samples could be profoundly damaging for companies looking to deploy an AI product.

Speaking at IPExpo Europe in London on Wednesday, Han said business leaders need to employ a diverse team to work on AI products in order to avoid biases.

“Killer robots are nothing compared to bias in AI,” she said. “Besides having global data, we also need a team of people with diverse backgrounds. If you’re designing a global product, you should include global data.”

See also: IBM Releases “Black Box” Breaker 

Biases such as selection bias, interaction bias, and similarity bias can lead to financial or legal difficulties when deploying AI on a large, professional scale, she said.

Han showed a graph of the gender imbalance in AI research across 23 countries, based on 4,000 research papers published at leading conferences.

Nations such as the US, Canada, Japan, China, and France all had 85 percent or more men working on AI, making research biases highly probable. This can be a particular problem when designing an AI product specifically for women.

“As a woman, I cannot say I understand men fully and I can design the perfect product for you… It’s the same for men working in AI.”

Even big companies with seemingly unlimited data sets can fall prey to bias because of the natural biases of the humans involved in creating an AI, Han said.

Microsoft Azure’s facial recognition system, for example, wrongly identified Michelle Obama as a man because its data set lacked sufficient images of black women.

“Well-intended systems may have unintended consequences. As a business leader you will need to think for your company, ‘What are the potential financial and legal risks?’

“This will be especially important and obvious in the financial industry and also maybe healthcare.”

Companies will have to manage disruption, embrace change, and rethink privacy and security. Han also said humans need to be part of the decision-making process, describing them as “the goalkeepers of the future in AI”.

“In the past, AI was written by hand; it was hardwired. So if you have AI bias, you can easily spot it and then remove it. However, in the future, AI is going to make the system change behaviour over time, so it will be harder and harder to spot the consequences of bias.”

Humans are still needed in the age of AI to explain an AI system’s decisions to customers, and to “call the shots about what’s fair, what’s legal, what’s ethical.

“Therefore it’s very important to have an explainable system so that you can tell your customers why you’re making these decisions.”