A survey by Deloitte has found that 48.6% of German companies have not seriously engaged with implementing the European Union’s Artificial Intelligence Act (EU AI Act), which came into force in August 2024. Meanwhile, only 26.2% of companies have started actively preparing for the new regulatory requirements, indicating a significant readiness gap among German businesses.
The survey, which polled 500 managers working on AI within their companies, highlights that 35.7% feel well prepared to meet the AI Act’s requirements, while another 30% are only partially prepared. In contrast, 19.4% of respondents consider themselves poorly prepared, and 14.9% remain undecided.
The EU AI Act, aimed at ensuring AI safety and ethical standards, categorises AI systems by risk levels—Minimal, Specific Transparency, High, and Unacceptable—each with specific compliance requirements.
Deloitte analysis reveals German corporate reluctance over EU AI Act
High-risk AI systems, such as those used in recruitment, must meet rigorous standards, while systems posing an unacceptable risk are banned. Enforcement will involve significant penalties, up to 7% of global annual turnover for severe violations, with most rules becoming enforceable from August 2026.
The results of Deloitte’s survey indicate that 52.3% of companies believe the AI Act will constrain their innovation capabilities in AI, reflecting concerns that the regulation may create obstacles rather than opportunities. Only 18.5% expect the new regulation to have a positive impact on innovation, while 24.2% are neutral. Similarly, 47.4% see the AI Act as more of a hindrance to developing AI-based applications, while only 24.1% believe the new regulations will aid in their development and introduction.
Despite the urgency, only a minority of companies have taken concrete steps to comply with the AI Act. According to the survey, 7.5% have set up a dedicated task force, 9.1% have assigned the task to a specific department, and 17.6% have initiated a project focusing on the new requirements. However, 53.8% of companies have implemented none of these measures, reflecting a widespread lack of proactive engagement.
“The fact that around half of companies have not yet worked intensively on preparing for implementation reflects the fact that many companies in Germany do not yet have the topic of AI on their agenda,” said Deloitte partner and digital and AI ethics lead, Sarah Becker. “On the other hand, the EU AI Act has made its way into German boardrooms like hardly any other regulation to date. Particularly in highly regulated industries such as the financial sector or the healthcare sector, German companies are used to regulatory requirements being incorporated into compliance processes and systems and becoming an integral part of their framework conditions for innovation.”
German companies anticipating trust, legal speed bumps for AI
The survey also shows a divided outlook on whether the AI Act will bring more legal certainty in dealing with AI. About 39% of respondents expect the regulation to provide greater legal clarity, but 35% disagree, while 26.3% remain undecided.
Similarly, on the topic of trust in AI, 34.9% of companies believe that the AI Act will lead to increased trust in AI technologies, while 30.8% do not foresee any positive impact. Another 34.3% remain undecided, reflecting mixed feelings within the industry about the regulation’s potential benefits.
A related global study by Deloitte, ‘State of GenAI in the Enterprise,’ which surveyed nearly 2,800 managers, including 150 from German companies, highlights similar challenges. The report shows that compliance with regulations, risk management, and the lack of a governance model are the most significant barriers to AI adoption.
Globally, only 23% of companies feel adequately prepared to manage risks, governance, and regulatory issues, underscoring the need for more robust frameworks. The report also finds that scaling AI projects remains a challenge, even as companies increase investment in data management to address issues such as data security and quality. The study likewise found that demonstrating a clear return on investment for Generative AI (GenAI) projects is a common concern.