The EU AI Act, the world's first comprehensive regulation on artificial intelligence, has officially come into force.
The legislation aims to ensure that AI systems developed and used within the European Union (EU) meet safety and ethical standards that protect fundamental rights. It also seeks to harmonise the internal market for AI across the EU, encouraging the adoption of AI technologies and fostering an environment for innovation and investment.
Margrethe Vestager, the European Commission's Executive Vice-President for a Europe Fit for the Digital Age, said: “AI has the potential to change the way we work and live and promises enormous benefits for citizens, our society and the European economy.
“The European approach to technology puts people first and ensures that everyone’s rights are preserved. With the AI Act, the EU has taken an important step to ensure that AI technology uptake respects EU rules in Europe.”
Categories of AI systems
The AI Act categorises AI systems into four levels of risk, each with specific compliance requirements.
Minimal Risk AI Systems include applications such as AI-enabled recommender systems and spam filters, which pose minimal risk to rights and safety. These systems are exempt from AI Act obligations, though companies may voluntarily adopt additional codes of conduct.
Specific Transparency Risk AI Systems include chatbots, which must disclose their non-human nature to users. This category also requires AI-generated content, such as deep fakes, to be clearly labelled.
Users must be informed when biometric categorisation or emotion recognition systems are in use. Providers must ensure synthetic content is marked in a machine-readable format to indicate it is artificially generated or manipulated.
High Risk AI Systems, including those used in recruitment or loan assessments, must meet rigorous requirements. These include implementing risk-mitigation systems, ensuring high-quality datasets, maintaining logs of activity, providing detailed documentation, offering clear user information, ensuring human oversight, and maintaining high levels of robustness, accuracy, and cybersecurity.
Unacceptable Risk AI Systems, which pose a clear threat to fundamental rights, will be prohibited. This includes systems that manipulate human behaviour, encourage dangerous actions among minors, enable ‘social scoring’ by governments or private entities, and certain applications of predictive policing.
The AI Act also addresses general-purpose AI models, which perform a range of tasks such as generating human-like text. These models will be subject to transparency requirements, with additional obligations for models that pose systemic risks.
European Commissioner for Internal Market Thierry Breton said: “With the entry into force of the AI Act, European democracy has delivered an effective, proportionate and world-first framework for AI, tackling risks and serving as a launchpad for European AI startups.”
Enforcement and advisory bodies
The enforcement of the AI Act will be overseen by the Commission’s AI Office, which will operate at the EU level. Member States are required to designate national competent authorities by 2 August 2025 to ensure compliance and conduct market surveillance.
Three advisory bodies will support the AI Act’s implementation. The European Artificial Intelligence Board will ensure uniform application and facilitate cooperation between the Commission and Member States.
A scientific panel will provide technical advice and issue alerts about risks associated with general-purpose AI models. An advisory forum of diverse stakeholders will offer additional guidance.
Penalties for non-compliance and implementation timeline
Fines for non-compliance can be significant, with penalties up to 7% of global annual turnover for breaches involving prohibited AI applications, up to 3% for other violations, and up to 1.5% for providing incorrect information.
Most of the AI Act’s rules will apply from 2 August 2026, while prohibitions on AI systems deemed to present unacceptable risks will take effect six months after entry into force. Rules for general-purpose AI models will apply after 12 months.
To bridge the transitional period, the Commission has introduced the AI Pact, encouraging AI developers to voluntarily adopt key obligations ahead of the legal deadlines.
Broader policy measures
This legislative action is part of a broader initiative to support the development of trustworthy AI within the EU, which also includes the AI Innovation Package and the Coordinated Plan on AI. These measures aim to ensure AI technologies are safe and respect rights while enhancing AI adoption, investment, and innovation across the EU.
Background developments of the European AI Act
In December 2023, the Commission secured political agreement on the AI Act. The following month, measures were introduced to support European startups and SMEs in developing trustworthy AI.
In May 2024, the Commission unveiled the AI Office. The amended EuroHPC JU Regulation, effective from last month, facilitates the establishment of AI factories for training general-purpose AI models.