May 19, 2023 (updated 29 May 2023, 9:39am)

The EU AI Act is improving – but still contains fundamental flaws

The draft legislation is more realistic, say AI researchers - but might still suppress sectoral innovation before it can truly flourish.

By Professor Dr. Philipp Hacker LLM, Dr. Andreas Engel LLM and Amelie Berz

As our legislators and societies navigate the rapidly evolving space of artificial intelligence (AI), they have to carefully balance the need to safeguard innovation in the space against any potential risks or ethical concerns that might arise. When it comes to generative AI specifically, any framework should therefore factor in the technology’s ability to create content, its capacity to learn and adapt, and its aptitude for user interaction. As such, new legislation should include respect for data protection, mandate sufficient transparency in operations and hold firms accountable for errors and harms, while keeping generative AI systems operationalisable for companies large and small. 

The EU has undoubtedly taken the lead in defining legislative guardrails for AI. In April 2021, the European Commission proposed the first-ever comprehensive legal framework for the technology: the Artificial Intelligence Act (AI Act). The proposal, which is underpinned by a ‘risk-based’ approach, classifies AI systems based on their potential to harm rights and safety. Consequently, high-risk AI systems, such as those operating in the medical or administrative sector, would be subjected to stringent regulations.

The EU Parliament building at dusk. Its recent deliberations over the EU AI Act have resulted in significant improvements, according to a trio of European AI researchers – but the legislation as written still threatens to accidentally suppress innovation in the space. (Photo by Thierry Monasse/Getty Images)

Foundation models in the EU AI Act

Last Thursday, the draft law was approved by the relevant committees within the European Parliament (EP). Not only did MEPs introduce the specific term ‘foundation model’ into the legislation – a term that has gained considerable traction across the computer science community – but they also supported three levels in the regulation of foundation models, including generative AI, as suggested in our recent paper on the subject, ‘Regulating ChatGPT.’ These levels comprise, first, minimum standards for all foundation models; second, specific rules for using foundation models in high-risk scenarios; and, third, rules for collaboration and information exchange along the AI value chain. 

In general, the rules for generative AI in the draft legislation, including transparency concerning use, training data, and copyright, as well as content moderation, are a step in the right direction. However, significant problems persist. For one thing, the definition of AI itself in the Act still seems, in our view, excessively broad, including any ‘machine-based system that is designed to operate with varying levels of autonomy’ capable of ‘generating outputs such as predictions, recommendations, or decisions that influence physical or virtual environments.’ 

That potentially covers swathes of technology irrelevant to the Act, including smart meters, planning assistants, rule-based systems and almost any advanced software. The concept of autonomy in the legislation is also excessively wide, failing to require that models have a certain ability to learn or adapt to new environments. Under this definition, an electric toothbrush mechanically shaking its brush over its user’s enamel could conceivably be categorised as ‘autonomous’.

Most importantly, the demand for risk assessment, mitigation, and management for all foundation models will prove daunting for small and medium-sized enterprises (SMEs) developing these systems. With limited compliance resources, they won’t be able to consider the overblown number of hypothetical risk scenarios and implement the associated risk management system. Arguably, only big tech companies will muster the resources to meet these requirements, helping to solidify their dominance in the space while simultaneously driving the industry’s core activities beyond the EU’s borders.

The ‘ChatGPT Rule’, known to the initiated as Art. 28b(4) AI Act EP Version, is also flawed. While its transparency obligations go in the right direction, not least in making AI service providers establish a clear understanding among users that they are dealing with an AI system, the legislation should also impose at least some duties on those generating AI content online, not least to help combat the spread of fake news and other misinformation. The call for transparency rights in the legislation should also extend to professional users and within social media contexts. Conversely, non-professional users outside social media contexts could be exempted, since addressees would have no legitimate interest in knowing about AI involvement in, for example, writing a birthday card.

The EU AI Act is the first, comprehensive attempt at legislative regulation of all things artificial intelligence. Recent advances in generative AI, however, have complicated internal deliberations over the incoming law. (Photo by Tada Images/Shutterstock)

Copyright woes

Compliance with EU law is mandatory – the AI Act EP Version re-affirms this principle, while introducing cautious ex-ante compliance duties. But this provision could be more robust. In our view, the mechanisms of the Digital Services Act (DSA) should be incorporated to provide a clear, actionable framework, such as mandatory notice-and-action mechanisms and trusted flaggers. These measures would decentralise control over AI output, solidify adherence to the law, and ensure a safer AI ecosystem.

Article 28b(4)(c) of the AI Act also deals with copyrighted material in training data, the existence of which must be disclosed. While a commendable idea, this provision is fraught with challenges. The question of what constitutes copyrightable material is often disputed among experts, while conducting due diligence along these lines will inevitably prove daunting for developers processing vast amounts of data. A potentially over-inclusive disclosure that also covers works of uncertain copyright status should suffice. This approach would prevent exorbitant due diligence costs and place the onus of copyright dispute on the individual authors – who may then decide if they believe their work is copyrightable and what course of action to take.

Overall, we believe that the draft legislation is heading in the right direction – but these deficiencies still threaten to derail generative AI development in the EU and beyond. Ultimately, risk management must be clearly use-case-specific and application-oriented to prevent the Act from becoming an impediment to AI design and deployment in Europe. Our collective aim should be to strike the right balance between protecting individuals and society from potential harms, and allowing the AI industry to innovate and grow, within meaningful guardrails – in the EU and beyond.

Read more: This is how GPT-4 will be regulated
