
What is the EU AI Act?

The pioneering legislation regulating AI is one step closer to becoming law. This is what you need to know about the EU AI Act.

By Livia Giannotti

The EU AI Act is the world’s first comprehensive set of rules for guardrailing AI. On 13 March, it was passed by lawmakers at the European Parliament in what is a crucial step towards the implementation of the landmark regulations in member states.

(Photo by Ascannio/Shutterstock)

The legislation was first proposed in April 2021 “to harmonise rules on artificial intelligence”. Since then, the major advances of generative AI that saw ChatGPT break into the mainstream have added a layer of urgency to the need for such regulations. 

But while the rise of generative AI has been a catalyst for the Act’s development, it has also been a significant obstacle. Lawmakers had to add provisions around the new applications of generative AI, and member states caused complications as they worried about competition with less strictly regulated AI developers outside the EU. 

Still, several months of negotiations culminated in a provisional agreement over the final wording of the Act reached by the Council and the European Parliament in December 2023. Two of the institution’s most important committees, the Committee on Internal Market and Consumer Protection (IMCO) and the Committee on Civil Liberties, Justice and Home Affairs (LIBE) also endorsed the draft in February 2024. 

The Parliament’s approval of the legislation is a significant milestone in establishing and implementing the EU AI Act. 

Why is an AI Act necessary?

Advances in AI research have always raised concerns over a potential machine takeover à la Terminator. Today, however, those worries have shifted to more tangible and more immediate ones.

Applications of AI systems are increasingly present across sectors, from education to healthcare, and generative AI has become easily accessible to the public. 


Along with these advances comes an increase in the threats they could pose. Recurrent risks of AI include the effects that poorly built models can have on surveillance, discrimination, copyright infringement and misinformation. Those risks have become all the more pressing as AI is applied to vital services.

A blog post by the European Parliament explains that the main goal of the EU AI Act is “to make sure that AI systems used in the EU are safe, transparent, traceable, non-discriminatory and environmentally friendly”. To “prevent harmful outcomes”, the Parliament believes that “AI systems should be overseen by people, rather than by automation.”

What regulations are in the EU AI Act?

The Act takes a risk-based approach to AI regulation. AI systems and applications are classified into four levels according to the risk they could pose to people and society. The higher the risk, the stricter the limitations on a system's use will be.

Minimal-risk AI systems

Most AI systems used in the EU present what lawmakers have identified as a minimal risk to people. The Act allows free use of those systems, which include applications such as AI-powered recommendation systems, video games, spam filters and inventory management systems.

Limited-risk AI systems

Risks arising from a lack of transparency about the presence of the technology are considered limited risks. Under the EU AI Act, all systems will have to make it clear to users when they are interacting with AI, and will have to comply with EU copyright law. These measures are aimed at fighting misinformation associated with content produced by AI chatbots and deepfakes, and at addressing copyrighted content used to train models without proper attribution.

High-risk AI systems

Systems that pose a high risk to people and society include AI applications in sectors such as medicine, transportation, education, law enforcement and employment. More specifically, these can include systems built to score exams, assist with surgeries, sort CVs for recruitment, evaluate evidence for court rulings or operate public services.

Because of the significant responsibilities given to such systems, they will “be subject to strict obligations before they can be put on the market”. The European AI Office, which is set to enforce the measures, will scrutinise the quality and fairness of datasets used to train the models and ensure that activity is traceable – among other requirements.

Unacceptable-risk AI systems

Some systems have been judged to pose an unacceptable risk to fundamental rights, society or humanity as a whole. “All AI systems considered a clear threat to the safety, livelihoods and rights of people will be banned, from social scoring by governments to toys using voice assistance that encourage dangerous behaviour,” the Act’s policy document specifies. 

Exemptions 

Several AI safety organisations have voiced concerns over law enforcement agencies' exemptions from some of the measures provided by the Act. For example, all biometric identification systems are considered high-risk, but exceptional uses by law enforcement agencies will be allowed to prevent imminent threats to public safety. Even then, such uses remain "subject to authorisation by a judicial or other independent body".

For Daniel Leufer, a senior policy analyst at Access Now, those exemptions are not in line with the Act’s principle of fundamental rights protection, and the legislation is “so full of loopholes” that it will fail to regulate “some of the most dangerous uses of AI”.

When will the EU AI Act become law?

Members of the European Parliament endorsed the regulation with 523 votes in favour, 46 against and 49 abstentions. Still, the text is subject to minor checks and approvals before it can become law. The next step is a final lawyer-linguist check before the European Council formally approves the legislation, expected by May 2024.

The law will be "fully applicable" two years after final approval, as stated in the policy document. However, prohibitions will be enforced after only six months, and most obligations will apply after 12 months. The obligations for high-risk systems will become applicable three years after entry into force.

In the meantime, the Commission has initiated a voluntary AI Pact that encourages developers to implement the Act’s measures as early as possible.

What happens if a company breaches the EU AI Act?

If an AI developer or company violates the Act's standards, the European AI Office will enforce fines, which will be "proportionate and dissuasive", according to Article 71 of the Act. These will range from €7.5m or 1.5% of a company's global turnover up to €35m or 7% of turnover, depending on the infringement.

What does the EU AI Act mean for the future of AI?

The main concern around the EU AI Act is the same that has accompanied every piece of AI regulation in the world: that legislation could hinder innovation. The issue has been voiced notably by France, Germany and Italy, when they stalled negotiations in November 2023 out of concern for competition with firms outside of the EU jurisdiction.

AI companies big and small have raised similar concerns, though they generally remain publicly in favour of the measures.

If the measures are found to be effective for guardrailing AI without stifling innovation, the Act could prompt more countries to adopt a similar approach. “The AI Act is much more than a rulebook – it’s a launchpad for EU startups and research to lead the global race for trustworthy AI,” EU commissioner Thierry Breton said in a statement.

The UK, for example, has chosen a pro-innovation approach, affirming that no strict AI regulations will stand in the way of AI development in the near future and that the measures it has approved will not become legally binding. China and the US have put regulations in place, but none are as advanced, or as firm. The Act could yet show that regulation and innovation are not as opposed as they seem.

“We managed to find that very delicate balance between the interest to innovate and the interest to protect,” Romanian MEP Dragoş Tudorache told journalists. 

Read more: Tackling industry challenges with AI and the cloud at Cloud Expo Europe 2024
