
AI will ‘harm workers’ without strict rules, TUC warns

The federation of trade unions says workers face discrimination with no redress unless tougher protections from AI are introduced.

By Ryan Morrison

Artificial intelligence has the potential to harm workers and limit their rights if governments do not act to introduce fair and clear rules for its use. That is the warning from the Trades Union Congress (TUC), which says the technology may be transformative but, "if left unchecked will lead to greater discrimination."

Employees could face discrimination with no redress if AI is left unchecked, warns the TUC (Photo: Gorodenkoff/Shutterstock)

A report by investment bank Goldman Sachs predicted that up to 300 million jobs could be lost or degraded by generative AI tools such as ChatGPT and Midjourney. OpenAI, the Microsoft-backed company behind ChatGPT, says up to 80% of workers could see their jobs affected by AI.

This is not just a matter of jobs being directly replaced by artificial intelligence or made irrelevant by AI tools. The predictions cover broader impacts on jobs, including changes to the way a person does their work, how they are hired and even the introduction of AI monitoring and oversight.

It is this last category that concerns the TUC, which warns AI is being used to make "high-risk, life changing" decisions, including hiring and firing. It adds that the technology is also used in workplaces to analyse facial expressions, tone of voice and accents, and "left unchecked will lead to greater discrimination at work across the economy."

The UK has taken a “pro-innovation” approach to the regulation of AI. In its recent white paper setting out guidelines for the regulation of AI in the UK, the Department for Science, Innovation and Technology confirmed AI would be regulated on an industry-by-industry basis and by the existing regulators for each industry, with no underlying legislation specific to its use.

The problem with this approach, says the TUC, is a lack of transparency: workers are being left in the dark over how AI is being used, what decisions it is allowed to make and whether a human is in the decision-making loop.

A survey by the TUC found that 72% of workers are concerned they could see an increase in unfair treatment from AI without careful regulation, and 82% support a legal requirement to consult staff or unions before monitoring technology is introduced.


AI and the need for a human in the loop

The TUC says employers must disclose to workers how AI is being used in the workplace to make decisions about them, and that every worker should be entitled to have a human review any decision made by an AI system so that the decision can be challenged.

The DSIT white paper does include requirements for transparency and explainability among the "five clear principles" all regulators should apply when monitoring the use of AI technology. It declares that "organisations developing and deploying AI should be able to communicate when and how it is used and explain a system's decision-making process." There are also requirements around fairness, accountability and redress for any mistakes or harmful decisions made by AI.

Despite this, the TUC says the guidelines do not go far enough to ensure the guardrails needed to safeguard workers' rights are in place. It called the white paper a "dismal failure" that gives regulators only vague guidance on how to ensure AI is used ethically, while providing no extra capacity or resources to help them.

The union federation also warns that the Data Protection and Digital Information Bill, being debated in parliament as a post-Brexit replacement for the EU's GDPR data regime, represents a "worrying direction of travel" that further dilutes workers' rights and protections. These include protection against automated decision-making and the right to a say in the introduction of new technologies through an impact assessment process.

Kate Bell, the TUC's assistant general secretary, said the government is refusing to put in place the guardrails needed to stop people from being exploited. "On the one hand ministers are refusing to properly regulate AI," she said. "And on the other hand, they are watering down important protections through the data bill. This will leave workers more vulnerable to unscrupulous employers."

Nicholas Le Riche, partner at law firm BDB Pitmans, said UK employment law has yet to get to grips with the impact of AI on the workforce, but that this will have to change. "Transparency over the use of AI at work is key, and since it can be used to determine whether someone gets a job or keeps their job, there are likely to be calls for regulations to ensure that workers have to consent, or at least be consulted, before its introduction.

"Similarly, there may need to be legislation which ensures that AI is not the sole decision-maker but is always overseen by a human manager who can correct any mistakes or possible bias. The government currently seems willing only to provide guidance on the use of AI, but as the technology becomes an ever greater part of our working lives it's unlikely to be long before something more concrete is needed."

Read more: EU could regulate ‘general purpose’ AI like ChatGPT
