Governments must work together on a global approach to the regulation of artificial intelligence, the UK’s foreign secretary will tell the UN Security Council (UNSC) today. James Cleverly is chairing a session of the UNSC that will focus on the impact of AI.
Cleverly is set to push the need for a coordinated response to technologies like OpenAI’s GPT-4 large language model, which has powered the development of popular tools such as ChatGPT. But the UK’s stated approach to AI regulation differs markedly from that of other regions, particularly the EU, with fewer guardrails for users being proposed.
James Cleverly on AI: ‘coordinated action’ required
The briefing in New York will discuss the potential implications of AI on international peace and security and how to promote its safe and responsible use. Cleverly is chairing the session because the UK holds the presidency of the UNSC this month.
Cleverly is expected to say: “No country will be untouched by AI, so we must involve and engage the widest coalition of international actors from all sectors.
“Momentous opportunities – on a scale that we can barely imagine – lie before us. We must seize these opportunities and grasp the challenges of AI – including those for international peace and security – decisively, optimistically and from a position of global unity on essential principles.”
The session will also hear from António Guterres, secretary-general of the United Nations, Jack Clark, co-founder of AI company Anthropic, and Professor Zeng Yi, director of the Cognitive Intelligence Lab and co-director of the China-UK Research Center for AI Ethics and Governance.
Is the UK’s approach to AI regulation in line with other countries?
The UK government’s initial approach to AI regulation, as detailed in a white paper released earlier this year, was to try to create a light-touch, pro-innovation environment for AI companies, with no overarching regulator overseeing the technology. Many in the industry saw this as a risky approach, and one that stands in stark contrast to that being pursued by the EU, which has laid down rules for general-purpose AI in its AI Act.
However, there have been signs that Rishi Sunak’s government is evolving its standpoint and putting added emphasis on AI safety, a topic Sunak himself discussed with US President Joe Biden when they met in Washington in June. The UK will convene a global AI summit later this year, which aims to gather experts from around the world to discuss how tools like ChatGPT can be used responsibly.
The UK’s Labour Party has described the government’s approach as “failing to keep up with the pace” of AI development and says workers will be disadvantaged by the regulatory regime set out in the white paper. Other countries, however, are also pursuing a relatively relaxed environment for AI. The US is among them, and Japan is the latest to set out its proposals, which would give companies the freedom to develop new systems, particularly in healthcare.
Speaking to Tech Monitor earlier this month, Paul Barrett, deputy director of the Center for Business and Human Rights at NYU Stern School of Business, said he expects developers to fall in line with the EU approach, which could become the global standard for AI much as GDPR has for privacy.
“Even if the US and UK fail to emulate the EU, I expect that major producers of AI apps and other products will conform to EU standards because they won’t want to lose out on the lucrative European market and it will prove inefficient to offer different versions of their products in different parts of the world,” Barrett said.