
The state of AI regulation around the world

With AI presenting increasingly real threats, the need for regulation is growing – but how?

By Livia Giannotti

Only a few months ago, the tension between regulating AI and pushing for its innovation still seemed to pit governments on one side against tech companies on the other.

But as the AI sector expands, so does the corresponding legal framework. While AI companies increasingly use safety as a selling point, governments are growing reluctant to impose guardrails that might stifle innovation and global competitiveness.

The impact of AI companies on the world is growing, but regulation is still a work in progress. (Photo by Ascannio/Shutterstock)

And the stakes in getting it right are rising with every passing month. While research into AI has driven advances in fields including healthcare, finance and education, critics warn that whole areas of the global economy could be automated, eliminating jobs and human agency over a wide range of vital processes. That is before even mentioning the impact that poorly coded or malign AI programs might have on labour relations, misinformation and surveillance.

The potential impact of AI and tech companies on the world is growing, but the policies and strategies for keeping it in check differ from one region to another. While most government efforts to regulate AI remain works in progress, here is a breakdown of the most fully fledged regulations around the world.

European Union: risk-based legislation

On 9 December 2023, the European Union agreed on the first-ever legal framework for AI regulation, the EU AI Act. As the first set of legally binding rules for the technology, the AI Act is a historic resolution: its measures, which are currently the most restrictive worldwide, will begin to become law across all member states in June 2024.

The EU has adopted a risk-based approach, meaning that the rules enforced on an AI system depend on the level of risk it presents. AI applications considered to present minimal risk – for example, AI-powered recommendation systems – will not be subject to mandatory rules.

Other systems, such as those widely used in medicine, education or hiring, are considered high-risk and will be subject to “strict requirements, including risk-mitigation systems, high quality of data sets, logging of activity, detailed documentation, clear user information, human oversight, and a high level of robustness, accuracy and cybersecurity.” All AI systems will also have to clearly disclose to users that they are powered by AI.


Finally, systems such as social scoring and emotion-recognition devices in the workplace have been judged to present an “unacceptable risk”. Under the incoming law, they will be banned outright, as they are deemed to threaten fundamental rights and entrench biased and discriminatory practices.

While the AI Act is the world’s most advanced AI legislation, some fear it could hinder European companies’ ability to compete with rivals elsewhere. However, EU commissioner Thierry Breton declared that “the AI Act is much more than a rulebook – it’s a launchpad for EU startups and researchers to lead the global race for trustworthy AI.”

UK: a pro-innovation approach

British Prime Minister Rishi Sunak has made it clear that he wants to make the UK a science superpower. However, while the EU considers regulation a part of the AI-growth equation, the UK has decided to adopt a “pro-innovation” approach that eschews new and complex guardrails for the technology. 

In the policy paper ‘A pro-innovation approach to AI regulation’, the UK government laid out the details of its plan for AI governance. There are two key elements to this approach. First, existing regulators are responsible for interpreting and implementing the UK’s core AI principles, namely safety, transparency, fairness, accountability and contestability. These watchdogs, in turn, are encouraged to align their thinking on all things AI with newly defined “central functions”, a framework to create a common understanding of AI risks.

As such, the UK government’s minister for AI and intellectual property, Viscount Camrose, declared that the UK will not pass any AI legislation soon. However, this has not precluded efforts to define international norms for the development and governance of AI systems. To that end, the UK government hosted the AI Safety Summit at Bletchley Park in November 2023. The gathering resulted in the Bletchley Declaration, an agreement signed by 28 countries including the US and China. The declaration aims to establish a shared understanding of AI risks and emphasises the need to “collectively manage” them through international collaboration.

According to one leading AI researcher, while the UK and the EU have different approaches to governance, the fundamental principles of their strategies are strikingly similar. “[They] both intend to build public trust in intelligent tech by mitigating AI perils whilst encouraging entrepreneurship and innovation in the field,” says Alina Patelli, a senior lecturer in computer science at Aston University. 

China: a focus on generative AI

With its tech industry at the forefront of AI development, China was one of the first countries to enact relevant regulations, starting in 2021 and establishing itself as a pioneer in AI legislation.

Chinese regulations mainly revolve around the use of generative AI, notably deepfakes. Control over AI-powered recommendation systems is also at the core of Chinese AI law, with measures such as a ban on dynamic pricing and a strict requirement that users be told when they are interacting with AI-generated content. Despite those regulations, some experts have noted that the country remains open to innovation. In a paper for the governance research organisation Brookings, AI regulation expert Mark MacCarthy explained that China’s AI governance has effectively found ways to gain safety protections “without losing the spur of innovation.”

However, these regulations are mostly directed at private companies rather than the state, and hinge on reflecting Chinese “socialist core values”. Indeed, some observers have remarked that China’s AI rules “are more about ensuring state control than protecting users.”

While the focus of the EU’s and the UK’s strategies is the safety of individual users (albeit within the boundaries of industry growth), the Chinese strategy is geared towards maintaining social order, especially by tackling misinformation and disinformation.

US: sector-specific measures

The US is home to the biggest tech companies in the world. With that comes a level of responsibility, or, as the White House puts it, initiatives to “make automated systems work for the American people”.

To do so, the US has opted for a sector-specific approach, characterised by the Biden administration’s Executive Order on the Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence, published by the White House in October 2023. The order stipulates that major AI developers must assess and report “their algorithms’ potential threats to national security”, including the data used to train their models, explains Patelli.

The executive order “also puts in place incentives to promote innovation and competition by attracting international talent and upskilling the domestic workforce,” Patelli says. Additionally, she explains, “Social risks, such as discrimination in AI-based hiring, housing allocation, and sentencing, are mitigated by requiring the relevant secretary to publish guidance on how federal authorities should oversee the use of AI in those fields.”

Biden’s order reflects a notably light-touch approach to state intervention in the AI business, driven both by a desire for industry growth and by the power of the country’s tech sector. The US’s permissive approach to AI regulation – which naturally benefits businesses – has been visible on more than one occasion, including Biden’s voluntary AI governance scheme and meetings between AI leaders and the White House.

Global cooperation: is it a technocrat’s world?

The effectiveness of national efforts to regulate AI will only be proven over time, not least because their limitations remain unclear. Patelli believes that “all major regulatory efforts promote ethical, safe, and trustworthy AI. [But] they also face similar challenges.”

She told Tech Monitor that one current limitation of AI regulation is that “some of the key terminology is vaguely defined and mostly reflects the input of technocrats”, calling for better representation of “general public needs”. The close relationships between the governments of the UK and the US and big tech companies – from leaders’ meetings to leniency in favour of innovation – make it difficult to discern efforts that would truly ensure safety for all.

The general public would also benefit from greater global cooperation, as geopolitical competition hinders governments’ efforts to implement stricter rules. With the EU, the UK, China and the US all vying to write the rules of AI, striking the right balance between innovation and regulation is crucial.

“With AI informing financial investments, underpinning national healthcare and social services, and influencing the way we consume online content, whoever sets the regulatory gold standard also has the ability to shift the global balance of power,” Patelli says. “Replacing toxic competition with international collaboration is key.”

Read more: UK AI regulation must be stronger to match global tech leadership ambitions
