
Don’t buy emotion-analysing AI, ICO warns tech leaders

The Information Commissioner's Office is developing biometrics guidance but warns that emotion-analysing AI is unlikely ever to work.

By Ryan Morrison

The Information Commissioner’s Office (ICO) has warned companies to avoid buying emotion-analysing artificial intelligence tools, as the technology is unlikely ever to work and could lead to bias and discrimination. Businesses that do deploy the technology could face swift action from the data regulator unless they can prove its effectiveness.

Emotional AI can be used to monitor the health of workers via wearable devices. But the technology is otherwise unproven, the ICO says (Photo by LDprod/Shutterstock)

Emotional analysis technologies take in a number of biometric data points, including gaze tracking, sentiment analysis, facial movements, gait analysis, heartbeats, facial expressions and skin moisture levels, and attempt to use them to determine or predict someone’s emotional state.
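To make the regulator's objection concrete, the sketch below shows, in deliberately naive form, what such a system does in principle: it maps a handful of biometric readings onto an emotion label. It is purely illustrative and not drawn from any real product or from the article; the feature names, thresholds and labels are invented, and the ICO's argument is precisely that mappings of this kind lack scientific support.

```python
# Purely illustrative sketch (hypothetical names and thresholds): a toy
# "emotion classifier" that maps biometric readings to an emotion label.
# This is the style of inference the ICO says is unsupported by evidence.
from dataclasses import dataclass


@dataclass
class BiometricSample:
    heart_rate_bpm: float     # heartbeat
    skin_moisture: float      # 0.0-1.0, e.g. a galvanic skin response score
    gaze_fixation_s: float    # seconds spent fixating on a target
    smile_intensity: float    # 0.0-1.0 facial-expression score


def predict_emotion(sample: BiometricSample) -> str:
    """Naive rule-based 'prediction' from biometric signals."""
    if sample.heart_rate_bpm > 100 and sample.skin_moisture > 0.7:
        return "stressed"
    if sample.smile_intensity > 0.6:
        return "happy"
    if sample.gaze_fixation_s > 5.0:
        return "engaged"
    return "neutral"


print(predict_emotion(BiometricSample(110, 0.8, 2.0, 0.1)))  # -> "stressed"
```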

The problem, says deputy information commissioner Stephen Bonner, is that “there is no evidence this actually works and a lot of evidence it will never work”. He warns that it is more likely to produce false results that could cause harm if a company relies on the findings.

He told Tech Monitor that the bar for a company being investigated if it does implement emotional analysis AI will be “very low” due to the warnings being issued now.

“There are times where new technologies are being rolled out and we’re like, ‘let’s wait and see and gain a sense of understanding from both sides’ and for other legitimate biometrics we are absolutely doing that,” Bonner says. But in the case of emotional AI, he adds that there is “no legitimate evidence this technology can work.”

“We will be paying extremely close attention and be comfortable moving to robust action more swiftly,” he says. “The onus is on those who choose to use this to prove to everybody that it’s worthwhile because the benefit of the doubt does not seem at all supported by the science.”

AI emotional analysis is useful in some cases

There are some examples of how this technology has been applied or suggested as a use case, Bonner says, including monitoring workers’ physical health through wearable devices and using the various data points collected to keep records and make predictions about potential health issues.


The ICO warns that algorithms which have not been sufficiently developed to detect emotional cues create a risk of systematic bias, inaccuracy and discrimination. It adds that the technology relies on collecting, storing and processing large amounts of personal data, including subconscious behavioural or emotional responses.

“This kind of data use is far more risky than traditional biometric technologies that are used to verify or identify a person,” the organisation warned, reiterating the lack of any evidence it actually works in creating a real, verifiable and accurate output.

Bonner says the ICO isn’t banning the use of this type of technology, just warning that its implementation will be under scrutiny due to the risks involved. He told Tech Monitor it is fine to use as a gimmick or entertainment tool as long as it is clearly branded as such.

“There is a little bit of a distinction between biometric measurements and inferring things about the outcome intent,” he says. “I think there is reasonable science that you can detect the level of stress on an individual through things in their voice. But from that, determining that they are a fraudster, for example, goes too far.

“We would not ban the idea of determining who seems upset [using AI] – you could even provide them extra support. But recognising that some people are upset and inferring that they are trying to commit fraud from their biometrics is certainly something you shouldn’t be doing.”

Cross-industry impact of biometrics

Biometrics are expected to have a significant impact across industries, from financial services companies verifying human identity through facial recognition, to voice recognition for accessing services instead of using a password.

The ICO is working on new biometrics guidance with the Ada Lovelace Institute and the British Youth Council. The guidance will “have people at its core” and is expected to be published in the spring.

Dr Mhairi Aitken, ethics research fellow at the Alan Turing Institute, welcomed the warning from the ICO but says it is also important to look at the development side of these systems and make sure developers are taking an ethical approach, creating tools where there is a need and not just for the sake of it.

“The ethical approach to developing technologies or new applications has to begin with something about who might be the impacted communities and engaging them in the process to see whether this is really going to be appropriate in the context where it’s deployed,” she says, adding that this process gives us the opportunity to become aware of any harms that may not have been anticipated.

Emotion-detecting AI – a ‘real risk of harm’

The harm that could be caused by such AI models is significant, especially for people who might not fit the ‘mould’ developed when building the predictive models, Dr Aitken says. “It is such a complex area to begin to think about how we would automate something like that and to be able to take account of cultural differences and neurodivergence,” she adds.

AI systems could find it difficult to determine what is an appropriate emotional response in different contexts, Dr Aitken says. “We display our emotions very differently depending on who we’re with and what the context is,” she says. “And then there are also considerations around whether these systems could ever fully take account of how emotions might be displayed differently by people.”

Unlike Bonner, who says there is minimal harm in using emotional AI tools in entertainment, Dr Aitken warns that this use case comes with its own set of risks, including people becoming accustomed to the technology and thinking it actually works. “It needs to be clearly labelled as entertainment,” she warns.

When it comes to emotional AI, the problem is that there are too many data points, and too much variation from one human to the next, to develop a reliable model, Bonner adds. This is something that has been shown in multiple research papers on the technology.

“If someone comes up to us and says, ‘we’ve solved the problem and can make accurate predictions’, I’ll be back here eating humble pie and they’ll be winning all of the awards but I don’t think that is going to happen,” he says.

Read more: The EU wants to make it easier to sue over harms caused by AI
