The European Union (EU) has started enforcing its AI Act, imposing strict regulations that prohibit artificial intelligence (AI) systems classified as posing an “unacceptable risk.” The legislation, which took effect on 1 August 2024, reached its first compliance deadline on 2 February 2025, granting regulators the authority to enforce bans on AI applications that violate the Act’s risk-based framework.
The AI Act categorises AI systems into four risk levels, each subject to varying degrees of oversight. Minimal-risk AI, such as spam filters and recommendation algorithms, remains largely unregulated, while limited-risk AI, including customer service chatbots, must meet basic transparency requirements. High-risk AI, such as systems used in medical diagnostics and autonomous vehicles, faces stricter compliance measures, including mandatory risk assessments.
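For readers who want the tiering at a glance, the sketch below encodes the four levels and the examples named above as a simple lookup. The structure and field names are illustrative, not drawn from the Act's text:

```python
# Illustrative sketch of the AI Act's four risk tiers.
# Examples are those cited in this article; the field names
# and structure are our own, not the Act's.
RISK_TIERS = {
    "minimal": {
        "examples": ["spam filters", "recommendation algorithms"],
        "obligation": "largely unregulated",
    },
    "limited": {
        "examples": ["customer service chatbots"],
        "obligation": "basic transparency requirements",
    },
    "high": {
        "examples": ["medical diagnostics", "autonomous vehicles"],
        "obligation": "stricter compliance, including mandatory risk assessments",
    },
    "unacceptable": {
        "examples": ["social scoring", "behavioural manipulation",
                     "biometric surveillance in public spaces"],
        "obligation": "banned as of 2 February 2025",
    },
}
```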
The most severe category, ‘unacceptable risk’, is now banned outright. Prohibited applications include AI systems for social scoring, behavioural manipulation, and biometric surveillance in public spaces. AI that attempts to predict criminal behaviour based on physical appearance, or to detect emotions in workplaces and schools, is also outlawed. Companies found deploying these systems in the EU could face penalties of up to €35m or 7% of their annual global revenue, whichever is higher.
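The penalty ceiling is a straightforward "whichever is higher" calculation. A minimal sketch, assuming revenue is expressed in euros:

```python
def max_penalty_eur(annual_global_revenue_eur: float) -> float:
    """Upper bound on fines for prohibited AI systems under the AI Act:
    EUR 35m or 7% of annual global revenue, whichever is higher."""
    return max(35_000_000.0, 0.07 * annual_global_revenue_eur)

# Example: a firm with EUR 1bn in annual global revenue faces up to EUR 70m,
# since 7% of its revenue exceeds the EUR 35m floor.
print(max_penalty_eur(1_000_000_000))  # 70000000.0
```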
In an interview with TechCrunch, Rob Sumroy, head of technology at UK-based law firm Slaughter and May, said the fines would not be enforced for some time. “For organisations, a key concern around the EU AI Act is whether clear guidelines, standards, and codes of conduct will arrive in time — and crucially, whether they will provide organisations with clarity on compliance,” he said. “However, the working groups are, so far, meeting their deadlines on the code of conduct for … developers.”
Industry response and compliance measures
To support early adoption, the European Commission (EC) launched the AI Pact, a voluntary initiative to help businesses align with the AI Act. Over 130 companies, including Amazon, Google, and OpenAI, have joined the pact, pledging to implement transparency and risk assessment measures for AI classified as high-risk.
However, firms such as Apple, Meta, and French AI company Mistral have not signed the pact. Despite this, compliance with the AI Act is mandatory for all businesses operating in the EU. Industry experts suggest that many of the banned AI applications are not widely used commercially, meaning compliance may not require significant operational changes for most companies.
Certain exceptions exist for law enforcement and public safety applications. AI-driven biometric surveillance may be permitted under specific conditions, such as aiding in the search for missing persons or preventing imminent security threats. These uses require prior authorisation and cannot serve as the sole basis for legal action. Emotion-detection AI may also be allowed in workplaces and schools when used for medical or safety purposes, such as assisting individuals with communication disorders.
The EC has established an AI Office to oversee compliance, provide regulatory guidance, and monitor enforcement. The AI Pact remains open for participation, offering companies access to best-practice frameworks, industry webinars, and compliance roadmaps.
While the AI Act will not apply in full until August 2026, the February 2025 compliance deadline marks a critical step in the EU’s AI regulation efforts. Additional guidelines, based on consultations with industry stakeholders, are expected later in 2025 to clarify enforcement mechanisms and regulatory expectations.