Microsoft will end the sale of so-called emotion-detecting artificial intelligence (AI) software and limit the use of facial recognition tools as part of a new framework for the responsible use and implementation of AI. It becomes the latest Big Tech firm to move away from these controversial techniques and attempt to counter the prospect of bias and discrimination in AI.

Microsoft has updated its rules around its AI products (Pic: Jean-Luc Ichard/iStock)

Why is emotion-detecting AI controversial?

Emotion-detecting AI software has proved controversial since its inception, with many researchers arguing it has no scientific basis. Last year Microsoft launched a review of its accuracy, and Google blocked its Google Cloud emotion-detection AI tools from reporting certain emotions that proved inaccurate.

“These efforts raised important questions about privacy, the lack of consensus on a definition of ‘emotions’, and the inability to generalise the linkage between facial expression and emotional state across use cases, regions, and demographics,” Sarah Bird, product manager for Microsoft’s Azure AI unit, said in a blog post.

With the threat of stricter regulations looming, a number of technology providers have started to pull back from these uses of artificial intelligence. Last week Clearview AI, the start-up that took billions of facial images from the public web and made them searchable to customers including police agencies, reportedly cut most of its sales team as it grapples with litigation and difficult economic conditions.

Clearview was fined £7.5m by the UK’s Information Commissioner last month for collecting and storing images of UK citizens without consent. The company has also stopped selling to private businesses in the US in the face of legal action.

Microsoft AI standards: a move to ‘more trustworthy AI’

The latest move from Microsoft will see the firm remove unfettered access to the facial recognition technology offered through its Azure cloud platform, with existing customers given a year before access is withdrawn completely. In future, Azure customers wanting to use facial recognition, whether to open doors or to grant access to websites, will need to apply for access.
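By way of illustration, the sketch below shows roughly what a call to the face detection endpoint in Azure looks like; the resource name and subscription key are hypothetical placeholders, and under the new policy gated capabilities such as face identification and verification would additionally require an approved access application, while the retired emotion attributes can no longer be requested.

```python
# A minimal sketch of calling Azure's Face detection REST API.
# The endpoint and key below are hypothetical placeholders; under the
# Limited Access policy, identity-linked features need approval first.
import requests

ENDPOINT = "https://<your-resource>.cognitiveservices.azure.com"  # hypothetical resource
KEY = "<your-subscription-key>"  # hypothetical key

def detect_faces(image_url: str) -> list:
    """Detect faces in an image, returning bounding boxes only.
    Attribute-based emotion inference is no longer offered."""
    response = requests.post(
        f"{ENDPOINT}/face/v1.0/detect",
        headers={"Ocp-Apim-Subscription-Key": KEY},
        # returnFaceId=false: face IDs are reserved for approved customers
        params={"detectionModel": "detection_03", "returnFaceId": "false"},
        json={"url": image_url},
    )
    response.raise_for_status()
    return response.json()  # list of faces with faceRectangle fields
```

Plain detection of face locations remains generally available; it is the identity-linked capabilities that move behind the access gate.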

This forms part of the wider publication of Microsoft’s new Responsible AI Standard, which sets out the company’s current best thinking on how AI can “respect enduring values”, including fairness, reliability, inclusiveness, privacy, transparency and accountability.

Microsoft says its framework will “guide how we build AI systems”, describing it as “an important step in our journey to develop better, more trustworthy AI.”

Facial recognition technology will now only be offered for narrower use cases that respect the privacy of the end user, mirroring the rules already governing Azure’s speech technology, which can create synthetic voices nearly identical to the source speaker.

Speech-to-text technology will also come under the framework, as “the potential of AI systems to exacerbate societal biases and inequities is one of the most widely recognised harms associated with these systems,” the firm said in a statement.

Microsoft says this is just the first step, explaining in a statement: “As we make progress with implementation, we expect to encounter challenges that require us to pause, reflect, and adjust. Our Standard will remain a living document, evolving to address new research, technologies, laws, and learnings from within and outside the company.”

Last year Google launched a similar review of its AI operations at Google Cloud, which saw it turn down a number of applications to use the technology. These included a request from an unnamed financial firm that wanted to use AI to decide whom to lend money to; Google refused on the grounds that it could not guarantee the system would be free of bias around race and gender. Google has also blocked an AI feature that detects and analyses emotions, citing concerns about cultural sensitivity.
