As our legislators and societies navigate the rapidly evolving space of artificial intelligence (AI), they must carefully balance the need to foster innovation against the potential risks and ethical concerns the technology raises. When it comes to generative AI specifically, any framework should factor in the technology’s ability to create content, its capacity to learn and adapt, and its aptitude for user interaction. New legislation should therefore mandate respect for data protection and sufficient transparency in operations, and hold firms accountable for errors and harms, while keeping generative AI systems operationalisable for companies large and small.
The EU has undoubtedly taken the lead in defining legislative guardrails for AI. In April 2021, the European Commission proposed the first-ever comprehensive legal framework for the technology: the Artificial Intelligence Act (AI Act). The proposal, which is underpinned by a ‘risk-based’ approach, classifies AI systems according to their potential to harm rights and safety. Consequently, high-risk AI systems, such as those operating in the medical or administrative sectors, would be subject to stringent requirements.
Foundation models in the EU AI Act
Last Thursday, the draft law was approved by the relevant committees within the European Parliament (EP). Not only did MEPs introduce the specific term ‘foundation model’ into the legislation – a term that has gained considerable traction across the computer science community – but they also endorsed three levels of regulation for foundation models, including generative AI, as suggested in our recent paper on the subject, ‘Regulating ChatGPT’. These levels comprise, first, minimum standards for all foundation models; second, specific rules for using foundation models in high-risk scenarios; and, third, rules for collaboration and information exchange along the AI value chain.
In general, the rules for generative AI in the draft legislation, including transparency concerning use, training data, and copyright, as well as content moderation, are a step in the right direction. However, significant problems persist. For one thing, the definition of AI itself in the Act still seems, in our view, excessively broad, covering any ‘machine-based system that is designed to operate with varying levels of autonomy’ capable of ‘generating outputs such as predictions, recommendations, or decisions that influence physical or virtual environments.’
That potentially covers swathes of technology irrelevant to the Act, including smart meters, planning assistants, rule-based systems and almost any advanced software. The concept of autonomy in the legislation is also excessively wide, failing to require that models have a certain ability to learn or adapt to new environments. Under this definition, an electric toothbrush mechanically shaking its brush over its user’s enamel could, conceivably, be categorised as ‘autonomous’.
Most importantly, the demand for risk assessment, mitigation, and management for all foundation models will prove daunting for the small and medium-sized enterprises (SMEs) developing these systems. With limited compliance resources, they will be unable to work through the sprawling catalogue of hypothetical risk scenarios and implement the associated risk management system. Arguably, only big tech companies will muster the resources to meet these requirements, helping to solidify their dominance in the space while simultaneously driving the industry’s core activities beyond the EU’s borders.
The ‘ChatGPT Rule’, known to the initiated as Art. 28b(4) AI Act EP Version, is also flawed. Its transparency obligations go in the right direction, not least in requiring AI service providers to make clear to users that they are dealing with an AI system. Yet the legislation should also impose at least some duties on those generating AI content online, if only to help combat the spread of fake news and other misinformation. The transparency obligations should likewise extend to professional users and to social media contexts. Conversely, non-professional users outside social media contexts could be exempted, since addressees would have no legitimate interest in knowing about AI involvement in, for example, writing a birthday card.
Copyright woes
Compliance with EU law is mandatory – the AI Act EP Version reaffirms this principle while introducing cautious ex-ante compliance duties. But this provision could be more robust. In our view, the mechanisms of the Digital Services Act (DSA) should be incorporated to provide a clear, actionable framework, such as mandatory notice-and-action mechanisms and trusted flaggers. These measures would decentralise control over AI output, strengthen adherence to the law, and ensure a safer AI ecosystem.
Article 28b(4)(c) of the AI Act also deals with copyrighted material in training data, the existence of which must be disclosed. While a commendable idea, this provision is fraught with challenges. The question of what constitutes copyrightable material is often disputed among experts, and conducting due diligence along these lines will inevitably prove daunting for developers processing vast amounts of data. A potentially over-inclusive disclosure, one that also covers works of uncertain copyright status, should therefore suffice. This approach would prevent exorbitant due diligence costs and place the onus of raising copyright disputes on individual authors – who may then decide whether they believe their work is copyrightable and what course of action to take.
Overall, we believe that the draft legislation is heading in the right direction – but these deficiencies still threaten to derail generative AI development in the EU and beyond. Ultimately, risk management must be clearly use-case-specific and application-oriented to prevent the Act from becoming an impediment to AI design and deployment in Europe. Our collective aim should be to strike the right balance between protecting individuals and society from potential harms and allowing the AI industry to innovate and grow within meaningful guardrails.