A ‘trust gap’ is emerging between business leaders and staff over the potential and implementation of AI in the workplace, according to a new survey from Workday. The poll by the enterprise cloud applications provider found that staff are generally more sceptical about AI than senior leaders: only 52% of employees welcomed the increased use of the technology in their companies, compared with 62% of C-suite executives. Meanwhile, only 23% of employees were confident that their organisation puts their interests above the company’s when implementing AI.

“There’s no denying that AI holds immense opportunities for business transformation,” said Workday’s chief technology officer, Jim Stratton, in response to the study’s findings. “However, our research shows that leaders and employees lack confidence in, and understanding of, their organisations’ intentions around AI deployment within the workplace.”

A new survey by Workday has revealed a “trust gap” between management and employees over the implementation of AI projects in the workplace. (Photo by Ground Picture/Shutterstock)

An incipient AI trust gap

The survey of 1,375 business leaders and 4,000 employees across North America, Asia and the EMEA region found broad support for AI’s potential in the workplace. Management and staff were largely aligned in their belief that AI would be transformational for their business (67% and 59% respectively), or would at least deliver some form of positive growth (44% and 37%). The two groups were also like-minded about the need to subject AI applications to some form of human control, with 70% of leaders and 69% of employees either agreeing or strongly agreeing with that sentiment.

The two groups were more divided, however, on the practicalities of implementing AI across their business. Only 42% of employees, for example, believed that their company had a clear understanding of which manual processes and systems should be automated using the technology. There also appears to be a knowledge gap between senior leaders and employees over AI implementation strategies, with 37% of the former stating that their business regularly reviews and updates its AI use cases and policies, compared with just 25% of staff.

AI trust and safety 

More disturbingly, three in four employees surveyed said that their organisation was not collaborating on AI regulation, while four in five said that their senior leadership had yet to share guidelines on the appropriate use of AI applications in the workplace. This chimes with separate research recently conducted by data consultancy Carruthers & Jackson, which found that 41% of the organisations it surveyed had little or no data governance framework in place, and with a study by MIT Technology Review Insights, which concluded that “internal rigidity” was preventing CIOs from efficiently implementing AI in their businesses.

These findings come amid heightened global interest in AI regulation, with many governments struggling to thread the needle between encouraging innovation in the technology and building guardrails to prevent it from inadvertently harming workers or entire industries. The latter concern was recently echoed by the UK’s Trades Union Congress, which called for urgent new legislation to protect workers’ rights as more AI systems are implemented across the private sector.

Read more: AI will ‘harm workers’ without strict rules, TUC warns