June 20, 2023 (updated 21 June 2023, 9:43am)

House of Lords report calls for UK national AI centre and top-down regulation

A national AI centre and embedding 'do no harm' into AI legislation are essential, a new publication from the Lords says.

By Ryan Morrison

A national artificial intelligence centre should be established to bring together existing regulations and ensure new AI regulations can be introduced quickly. That is one of the key recommendations of a new ethical AI report by the House of Lords. One of the report authors, Lord Chris Holmes, told Tech Monitor there is also a need for human-led panels throughout organisations and society to decide on all aspects of AI guardrails and deployment.

Lord Chris Holmes says AI must follow an inclusive development path. (Photo by thephototeam.co.uk)

Produced by the cross-party think tank Policy Connect, the report is called ‘An Ethical AI Future: Guardrails and Catalysts to make Artificial Intelligence a Force for Good’. Launched this week and led by MP Daniel Zeichner, Lord Tim Clement-Jones and Lord Holmes, it sets out ways the country can meet the need for regulation without losing out on the benefits of AI.

In addition to calling for top-down regulation, the report sets out the need for international cooperation and for new statutory duties around culture and doing no harm to be baked into legislation. These would then have to be adhered to by any organisation building, deploying or using artificial intelligence.

Prime Minister Rishi Sunak is trying to establish the UK as a global AI safety powerhouse, a place that can lead the world on AI standards, ethics and regulatory approaches. This includes the first AI safety summit, set to be held in the UK later this year. He has also called for a global organisation to tackle the regulatory minefield surrounding the technology.

The new Lords report adds to the growing body of evidence around the need for stricter AI regulation. Labour has also called for greater levels of top-down regulation, and the EU is imposing rules around the use of copyrighted data in training foundation AI models. The government’s new Foundation Model AI Taskforce, chaired by Ian Hogarth, has also been tasked with finding ways to safely deploy large language models across the economy.

Policy Connect formed its conclusions following a series of hearings and research sessions involving industry, academic and third-sector experts. “Harnessing the opportunities needs proper regulation and governance,” it declared. Contributors to the report are said to have stressed the need for an “unambiguous and responsive regulatory environment”.

The need for a UK national AI centre

The proposed national AI centre would bring together existing ethical and research bodies, and would be given the resources and powers necessary to get things done. “It would build on current coordination work across regulators,” the report authors explain. “This would ensure there are no overlaps and gaps in regulation and allow for directed research to foresee future scenarios” that could impact or change the guardrails.


Lord Holmes believes it makes sense to have a single centre, as it can take horizontal and vertical views of the entire landscape. The danger of the government’s approach of leaving regulation to individual regulators, as described in its recently released AI whitepaper, is that there will be overlap, or a lack of a coherent national approach. “You wouldn’t have the additional benefits you get when you have a view across the whole economy, society and regulatory landscape,” he says.

One of the report’s other major recommendations is public panels: groups of people drawn from different sectors of society who would advise on the implementation of the guardrails and help steer their development. This, says Lord Holmes, could also be replicated at a company level to ensure consistent and safe use within an organisation – led by a senior executive-level AI appointment.

He described it as a form of “cultural embedding”, or a way to bring humans into the regulatory loop. “We need to look at the human dimensions, leadership and culture around the use of artificial intelligence,” he explains. The aim, he adds, is to “bring people around different points to drive public engagement and discourse, but to get to move forward together with a better sense of understanding.” This mirrors similar panels used in Taiwan across all sectors of the economy to ensure “buy-in” for an idea from the wider population.

AI regulation must be inclusive by design

The idea is to draw on approaches taken with regulatory changes to financial services in the UK, relying on common law principles, broad frameworks and an evidence-led approach to implementing new legislation – rather than the sort of high-profile, hard-coded rules seen in the past or being deployed elsewhere.

Lord Holmes believes that society needs to be “inclusive by design” when it comes to AI. This runs from primary school children first learning what an algorithm is through to the foundation AI labs building the code that could steer the direction of society for decades to come. “Having that as a golden thread throughout, to be inclusive by design, is when we really start to make significant progress with all of the technology,” he says.

Adam Leon Smith, chair of the BCS Fellows Technical Advisory Group, broadly welcomed the report, particularly the inclusion of a “do no harm” requirement written into statute surrounding the development and deployment of AI. “It also recommends a similar requirement to have a deliberated sign-off in relation to ‘high-risk’ systems, and a nominated responsible person at board level,” Leon Smith says. “This is substantially different to the EU’s draft AI Act, which requires third-party conformity assessment in many cases.”

ISO standards are in development that would provide the sort of overarching AI management systems proposed in the Lords report. Such standards would be a “significant first step in internationally agreed AI management principles,” he says.

Leon Smith adds: “As the report itself recognises, many companies using AI do not realise the risks they are taking, often they expect their technology providers to be bearing the risk. Before we can really expect organisations to implement strong AI governance, they need training and skills.

“The report also calls for professional registration for the AI industry – I think this needs to cover people under whose authority a system is used, not simply AI experts.” 

Read more: UK AI taskforce: Sunak appoints investor and entrepreneur as chair
