‘I would predict that the standard of life in progressive countries one hundred years hence will be between four and eight times as high as it is today,’ wrote the economist John Maynard Keynes in 1930. He predicted that technological developments and automation would bring a brighter future for workers — estimating that people would work no more than 15 hours per week by 2030.
That still seems a lofty goal just seven years out from Keynes’ target date. But rapid improvements in AI might finally supply the ingredients for the full-scale workplace transformation the economist foresaw. The question now is whether this will bring us closer to Keynes’ post-work utopia or, as some fear, trigger waves of workplace chaos and redundancies.
AI is already reshaping how Britain works, from the algorithmic management techniques Uber uses to govern its drivers, to the questionable emotion-recognition software used by some HR departments in the hiring process. Some of the UK’s leading unions fear that these kinds of applications will ultimately favour the priorities of management over those of staff, to the latter’s severe detriment. But representatives from some of the country’s biggest trade unions also told Tech Monitor that AI isn’t inherently incompatible with their ongoing campaign to hold management to account.
“We’re not anti-technology,” says Laurence Turner, head of research and policy at GMB. “In fact, our members are more likely to say that, over the past five years, new technology has made their jobs better rather than worse.” For that to continue, adds Turner, tough legislation is needed — not only to curb the worst cost-cutting impulses of bosses, but also to preserve human agency in the workplace. “We are on the cusp of profound change in a lot of jobs,” he says. “We think that the statutory framework and the industrial relations mechanisms just aren’t ready at this moment.”
AI is already changing the workplace
There are a host of much-cited concerns about how AI might upend the workplace — not least the risk of eliminating jobs and impoverishing workers. Unions also fear that decontextualised algorithmic decision-making could intensify work, with severe safety implications for physical labourers, or negatively impact workers’ mental well-being. What’s more, many in the movement are alarmed at the prospect of machine-learning systems reproducing discriminatory biases embedded deep within their training data.
Unions like GMB are, says Turner, particularly concerned about the so-called ‘black box effect’, where a machine-learning model might be used to make recommendations — including about hiring and firing — “that neither the individual worker, their representative, nor management fully understand”.
Nevertheless, unions say they’re also keen to harness the technology’s transformative power to improve working conditions. “AI, of course, is here to stay,” says Kate Dearden, head of research at Community. “It’d be wrong for us to completely write off AI as a threat to be prevented. We’d only be letting down our members and industries by doing that.”
If implemented properly, says Dearden, AI could make jobs more efficient and flexible and — perhaps most importantly — put “more money in our members’ pockets.” AI could also make jobs more rewarding by eliminating monotonous tasks. It might even make workers safer, not only by automating dangerous physical activities but also by highlighting when workers might be at risk and directing them to switch positions or change techniques.
Some unions are already using AI themselves. Tools like WeClock, which lets workers self-track their working hours, breaks and commute times, can facilitate ‘collective digital action’ that weaponises workers’ own data in their favour. By pooling that data, unions can build evidence for campaigns and spot unfairness in the workplace more easily.
The potential risks and rewards differ across industries. Equity, which represents some 47,000 performers across the UK, is particularly concerned about the rise of generative AI. Without strong industrial agreements, the actors’ union fears that digital cloning could steal already-sparse roles from working actors without paying the performers who inspired these models. But it’s important to remember that performers can also reap big rewards from AI, says Liam Budd, who leads the New Media team at Equity.
“The use of interactive digital humans could allow our performers to appear in multiple productions around the world and boost their income,” argues Budd. AI might also increase accessibility for disabled actors, who may need greater flexibility, and enhance safety for stunt performers.
Regulation is trailing behind, say unions
“If it’s applied ethically and responsibly in collaboration with trade unions and the workforce, then AI actually has the potential to impact our members’ lives in a positive way,” says Budd. But regulation risks falling far short of that objective. The Trades Union Congress (TUC) issued a stark warning to this effect in April, arguing that ministers had failed to introduce the necessary regulations to safeguard workers’ rights amid the rapid rise of workplace AI.
The TUC, which represents more than five million workers across England and Wales, remains dissatisfied with provisions in the government’s Data Protection and Digital Information Bill, the UK’s post-Brexit replacement for GDPR, as well as the National AI Strategy and the recent white paper from the Department for Science, Innovation and Technology, which critics argue favours Big Tech. Prospect’s deputy general secretary Andrew Pakes is just as scathing. “The government’s AI strategy is silent on workers,” he says. “It talks about innovation but not industrial change or investment in it.”
The TUC is calling for legislation mandating that employers tell their staff when — and how — AI is being implemented in their workplaces. It also argues that workers should be entitled to a human review of any decisions made about them by a machine-learning model — especially when it comes to hiring and firing. Staff don’t want to face a scenario “where [they] feel that they can’t challenge decisions made by technology,” says Mary Towers, who leads a project on AI at the TUC. Turner agrees. “I think we only need to look at the Post Office Horizon scandal for an example of what can happen when an IT system is seen as too complicated to challenge or too big to fail,” he says.
Turner, Pakes, Dearden, and Towers all told Tech Monitor that robust legislation needs to emphasise physical and mental well-being in the workplace — not just productivity gains. “At the moment the debate feels very fixed on the number of jobs that could be lost or created, with less focus on the quality of that work,” says Turner. In an April 2022 survey by GMB, 32% of 1,620 respondents said that workplace surveillance, including by automated systems, negatively impacted their mental health. Similarly, Towers highlights that so-called ‘algorithmic management’ — which might fail to account for a worker’s individual context — can leave workers feeling powerless, resulting in increased stress and anxiety.
Unions have been left playing catch-up
In the meantime, trade unions are playing catch-up. For one thing, Pakes argues, the legal framework for employment rights is now outdated, having been written in a century when legislators were more focused on physical risks than on unseen digital dilemmas in the workplace. Unions are also struggling to keep up with the pace and complexity of AI themselves. “There is an understanding gap at the moment,” says Turner. “If you’re a trade union officer — or, for that matter, a management representative — trying to understand the outputs of an unsupervised machine learning model in employment is something which most of us just don’t have the training in.”
For its part, GMB is investing resources to improve its representatives’ understanding of how AI systems work, says Turner. Community’s organisers also recently received training on AI from the Why Not Lab, a specialist consultancy firm that aims to help progressive organisations navigate the digital transformation — a process that Dearden says helped remove some of the fear representatives had around engaging with this complex topic.
Unions want to be able to have their say whenever new technologies are on the table. Without that kind of diversity of input, argues Pakes, the entire economy loses out: “The UK is missing out on its potential because not everyone has a seat at the table in driving forward this new technology.”
That’s why it’s so important that unions be consulted on when — and how — AI will be used to make decisions in the workplace, says Dearden. It’s the workers on the shop floor and the production line who are best placed to understand the kinds of blockages that AI could, if implemented effectively, help overcome. “Technology change has to be brought into the scope of collective bargaining and recognised as an area in which worker consultation is legally required,” argues Dearden. “We just don’t think employers will take it seriously otherwise.”