The use of AI to manage employees’ work and productivity – otherwise known as algorithmic management – has been linked to unfair working practices at various tech giants. While not every employer is looking to squeeze every ounce of productivity out of their workers, decisions that impact workers’ lives should always involve human managers, experts say.
Algorithmic management is usually associated with tech companies using AI to squeeze maximum productivity from their workers. In some cases, it has been linked to abusive or unjust management practices.
Last year, for example, Wired reported that couriers for food delivery service Uber Eats said they had been fired due to a facial recognition system, used to verify drivers, that was incapable of recognising black and ethnic minority employees.
Amazon employees have also reported being sacked by the company’s algorithmic HR system with no explanation, with one UK worker saying that being constantly watched by the technology affected their mental health and made them feel like “a machine”.
But it’s not just the tech giants that are using algorithmic management. Mary Towers, policy officer and AI specialist for the Trades Union Congress (TUC), says the technology is being used across all sectors. “We haven’t come across one of our unions yet where they haven’t reported the use of algorithmic management in one way or another,” she says.
“It takes different forms depending on the sector,” Towers adds. “For example, in warehouse and retail it includes tracking, automated scheduling and performance analysis that could impact on rates of pay and benefit.
“In other sectors such as education, lectures are recorded and our workers don’t know whether facial recognition technology is being used for performance management and to analyse emotions from the teachers. The point is that the implications are already being lived by workers.”
Algorithmic management needs human oversight
AI has its benefits in workforce management. “There are two obvious benefits on offer,” government body ACAS found in a 2020 report: “improved productivity through time saved and more efficient decision-making; and new insights into workplace behaviour, human relationships or other trends as a result of vast data processing.”
These new insights could include spotting high-potential candidates, or identifying employees who might be about to leave, explains Paul Henninger, global head of KPMG Lighthouse, the consultancy’s data centre of excellence. “You can use predictive analytics to identify people that might be on the way out,” he explains. “The trick is to understand it accurately and do it early.”
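To make this concrete, the sketch below shows the sort of attrition model Henninger describes in general terms: a classifier trained on historical HR records that scores how likely each employee is to leave. It is an illustrative Python example using scikit-learn; the feature names, synthetic data and thresholds are assumptions for demonstration, not anything KPMG has published.

# A minimal sketch of a predictive attrition model, assuming hypothetical HR features.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n = 500

# Hypothetical features: tenure in years, engagement survey score (1-5),
# months since last promotion, and weekly overtime hours.
X = np.column_stack([
    rng.uniform(0, 15, n),    # tenure_years
    rng.uniform(1, 5, n),     # engagement_score
    rng.uniform(0, 48, n),    # months_since_promotion
    rng.uniform(0, 20, n),    # overtime_hours
])

# Synthetic label: in this made-up data, leavers skew towards low engagement
# and long gaps since their last promotion.
logit = -2.0 - 0.8 * X[:, 1] + 0.05 * X[:, 2] + 0.08 * X[:, 3]
y = (rng.uniform(size=n) < 1 / (1 + np.exp(-logit))).astype(int)

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
model = LogisticRegression(max_iter=1000).fit(X_train, y_train)

# The output is a risk probability per employee for a human manager to review,
# not an automated decision about anyone's job.
risk = model.predict_proba(X_test)[:, 1]
print("Highest predicted attrition risk in test set:", round(float(risk.max()), 2))

The design point, echoed by ACAS below, is that such a model produces a risk score for a human manager to act on early, rather than a decision made on their behalf.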
AI-powered decision-making can also be more predictable, and so in some ways fairer. “Workers can claim a manager can be fickle, treating an employee poorly one day and better the next,” says Aiha Nguyen, programme director of the Labor Futures Initiative at think tank Data & Society. “That doesn’t happen with AI.”
But AI-powered management becomes harmful when it is used as a replacement for human decision-making. “Algorithms should be used to advise and work alongside human line managers but not to replace them,” ACAS advised in its report. “A human manager should always have final responsibility for any workplace decisions.”
This is not always the case, however, says Nguyen: AI is often used to assess the output and quality of an employee’s work without human oversight. “The part that concerns many is the evaluation of employee work without human intervention.”
Companies often automate management decisions to save time, she explains. “One of the aspects that is different about algorithmic management as opposed to human management is the ability to provide instant feedback, but that can come at a price.”
AI-powered management calls for cross-functional teams
Uber and Amazon are extreme cases of the use of AI in workforce management, says Bo Lykkegaard from analyst company IDC, and not all companies are trying to squeeze every last drop of productivity out of their workers.
In the context of the Great Resignation, most bosses will be more concerned about recruitment, engagement and retention, he argues. “The worry of a CEO is not ‘is my workforce doing enough?’ but rather ‘how do I fill vacancies and keep staff from wanting to leave?’.”
As a result, he argues, AI will be mostly used to add value to existing HR processes, not replace human interaction.
Nevertheless, the risks of bias and unfair treatment are such that algorithmic systems need to be deployed with both technology and HR expertise, says Henninger. “Best practice we recommend is whoever is leading a group of people working together needs to have an understanding of the data,” he says. “You need a team of data, tech, business and HR people working in a multifunctional team.”