
Urgent need for AI workers’ rights legislation, TUC warns

The union is working with a number of organisations across academia, tech and government to draft new AI legislation.

By Ryan Morrison

Urgent new legislation is needed to protect workers from AI and ensure the technology “benefits all”, the UK’s Trades Union Congress (TUC) has warned as it launches a new task force to tackle the “gap in legislation”. The union body says the UK employment sector could become a “Wild West” without rapid changes. This warning comes after a committee of MPs also called for faster action on AI legislation.

The TUC has urged the government to ensure humans are involved in any decisions about people made by AI. (Photo by Gorodenkoff/Shutterstock)

The TUC’s new task force includes academics, lawyers, politicians and technologists and has been designed to “fill the gap” in employment law. This will involve drafting proposed legal protections to ensure AI is regulated fairly at work for the benefit of both employees and employers. The group plans to publish its proposals, referred to as the AI and Employment Bill, early next year and then begin lobbying for them to be included in UK law, which is unlikely to happen until after the next general election.

It includes representatives from the tech trade group techUK; the Chartered Institute of Personnel and Development; BCS, the Chartered Institute for IT; AI policy group the Ada Lovelace Institute; and a number of unions and academic institutions. Four MPs will also sit on the committee: Conservative David Davis, Labour’s Darren Jones and Mick Whitley, and SNP representative Chris Stephens.

The new task force will be jointly chaired by TUC assistant general secretary Kate Bell and Gina Neff, executive director of the Minderoo Centre for Technology and Democracy at the University of Cambridge. In a statement, the pair said the UK was “way behind the curve” on the regulation of AI and that UK employment law was failing to keep pace with the development of new technologies. This was also leaving employers uncertain about how best to “fairly take advantage of the new technologies”.

AI is already being widely used across different sectors of the economy, with automated systems deployed for tasks ranging from sorting through CVs to biometric analysis of candidates to assess their suitability. However, the TUC says employers are often buying AI systems without fully understanding the implications for workers.

The task force is likely to build on existing calls from the TUC for protections to be enshrined in law around the use of AI by employers. These include requiring employers to consult with trade unions about the use of the most high-risk and intrusive forms of AI. They also call for a legal right for all workers to have a human review the decisions made by AI.

The TUC has also urged the government to update the UK GDPR legislation and its replacement, the Data Protection and Digital Information Bill, as well as the Equality Act to guard against discriminatory algorithms. It hopes this will all be addressed at the upcoming AI Safety Summit in November. 


“AI is already making life-changing decisions about the way millions work – including how people are hired, performance-managed and fired,” Bell said. “But UK employment law is way behind the curve – leaving many workers vulnerable to exploitation and discrimination.”

Neff said laws must be fit for purpose and ensure that AI works for all. Speaking of the upcoming summit, she added: “AI safety isn’t just a challenge for the future and it isn’t just a technical problem. These are issues that both employers and workers are facing now, and they need the help from researchers, policymakers and civil society to build the capacity to get this right for society.”

AI deployments and the need for speed

The warnings from the TUC and the new task force come soon after parliament’s Science, Innovation and Technology select committee published its long-awaited AI regulation report. The committee has been holding hearings and investigating the implications of AI, particularly generative AI such as OpenAI’s ChatGPT.

In the report, the MPs reject the need for a pause on the development of next-generation foundation AI models but urge the government to speed up legislation. “Without a serious, rapid and effective effort to establish the right governance frameworks – and to ensure a leading role in international initiatives – other jurisdictions will steal a march and the frameworks that they lay down may become the default even if they are less effective than what the UK can offer,” the report warns.

“We urge the government to accelerate, not to pause, the establishment of a governance regime for AI, including whatever statutory measures as may be needed,” it concludes.

Speaking to Tech Monitor, Nicholas Le Riche, partner at law firm BDB Pitmans, which is not involved in the new task force, says the government only seems willing to provide guidance on the use of AI, rather than specific legislation. However, “as AI becomes more and more part of our working lives it’s unlikely to be too long before something more concrete is needed.”

“Transparency over the use of AI at work is key and since it can be used to determine whether someone gets a job or keeps their job, there are likely to be calls for regulations to ensure that workers have to consent, or at least be consulted with, before its introduction,” Le Riche adds. “Similarly, there may need to be legislation which ensures that AI is not the sole decision maker but is always overseen by a human manager who can correct any mistakes or possible bias.”

A government spokesperson told Tech Monitor that the government already has its own task force bringing together government and industry to develop safe and reliable use of AI. “AI is set to fuel growth and create new highly-paid jobs throughout the country while allowing us to carry out our existing jobs more efficiently and safely,” they said. “Our pro-innovation, context-based approach to AI regulation will boost investor confidence, and help create these new jobs, while allowing any issues related to AI to be scrutinised in the specific context they arise in by our world-leading regulators.”

Read more: Beware large language model cybersecurity risks – NCSC
