Public opinion towards technology varies significantly across the globe – a recent study by the Pew Research Center revealed that people in Asia are substantially more positive about AI and automation than their peers in the West. These attitudes shape how technology is perceived and adopted, and leaders must engage with them in good faith when implementing technologies that impact people’s lives.
Public opinion on AI and automation
The Pew Research Center study shows stark global differences in public attitudes towards robotics and AI. When asked their opinion on the technologies, more than 60% of respondents in Singapore, South Korea, Taiwan and Japan said they were good for society.
By comparison, fewer than half of respondents in the US, Canada, and much of Western Europe shared this positive view. France has the most negative opinion of AI and robotics overall, with less than 40% thinking the technologies have been good for society.
These attitudes affect the extent to which given technologies will be accepted within a society, and many have failed after clashing with public opinion. Low adoption of Covid-19 tracing apps in the UK and US, for example, can be attributed in part to distrust of the location-tracking technology they rely on.
Vargha Moayed, chief strategy officer of robotic process automation (RPA) software vendor UiPath, says national attitudes shape the reception of its technology by clients and their employees.
“Japan is the most forthcoming and there is no fear of robots per se,” he explains. “Japanese companies have always been very good at reskilling their people, so they just accept it and even embrace AI and technology.”
By contrast, many European countries tend to be more concerned with the way AI and automation will affect citizen well-being and the ethical questions raised by technology, Moayed explains.
As a result, UiPath pays careful attention to the views of its clients’ employees – which have a “strong correlation” with public opinion, Moayed says – when implementing its systems, as those views have a significant impact on a successful transition to automation.
The roots of technology mistrust
Sometimes, public mistrust of a given technology reflects deep-seated cultural values. Baobao Zhang, who researches AI governance and public opinion on AI at Cornell University, has found people often misunderstand what AI and machine learning are, and instead base opinions on cultural or community attitudes and “gut instinct”.
Patrick Sturgis, a professor of quantitative social science at the London School of Economics, says this can be seen in public opinion research. People often support science and technology in the abstract, but their feelings about specific advancements will vary according to media portrayals as well as their previously held beliefs.
“These are areas where… science and technology tend to come into conflict with people’s core values,” says Sturgis. “Obviously religion is one important marker, but they can be kind of humanist values as well.”
The Pew Research study found that, in every country except India and Russia, those with a higher education level are more likely to support AI and robotics. Sturgis says there is evidence that a higher education level also makes people more trusting of science in general. “As a university graduate, you are more likely to understand the process of science, to reject conspiracy theories about science,” he says.
But technology leaders must not dismiss public mistrust of tech as irrational or uneducated. “There are things that people should be wary of, and rightfully so,” says Zhang, such as racial bias in AI systems used by law enforcement.
Sturgis adds that people with a lower socio-economic status might rightly question whether they will benefit from technological advancements. “There’s a justified suspicion that ‘we’re not going to gain from this, someone’s going to gain but it’s not going to be us’,” he says. “If there’s going to be oil taken out of the ground near here, am I going to have cheaper energy bills? Probably not, but I might have a smoke-filled environment and trucks going by.”
Building trust in innovation
Perhaps recognising that people’s concerns about innovations are often well founded, many scientific organisations are changing how they engage with the public, away from ‘educating’ towards a conversational approach, says Sturgis.
“I think over the past few decades that’s changed to a more kind of… dialogic approach where the idea is to engage and involve and have a two-way conversation so that people are not being spoken at, but are part of the whole process,” he says.
But there is a difference between good-faith engagement and a public-relations exercise designed to curtail a possible backlash. “Trust in science really should be based on trustworthiness of the scientific actors rather than just promoting trust,” he says.
Zhang says she is seeing a growing movement among tech companies towards transparency, with firms allowing outside experts to test new developments and find flaws. However, some have questioned the extent to which voluntary commitments to transparency will prevent organisations from using AI harmfully.
Moayed says UiPath advises its clients to pursue “bottom-up” change management programmes when introducing automation. This means involving employees from the beginning of a transition and allowing them to try out the software themselves.
But he also says clients need to acknowledge the “legitimate apprehension” of employees who will have seen huge technological change within their lifetimes. “It’s hypocritical to say that technology does not destroy some categories of jobs,” Moayed says. “You need to manage that transition, in terms of being able to provide opportunities for people that are going to be displaced to acquire new skills.”
“Trying to slow down innovation is silly,” he adds. “On the other hand, being naive about it and believing that everything will just take care of itself is also, in the 21st century, unacceptable.”