European Union lawmakers have approved the draft EU AI Act, which will govern the use and deployment of artificial intelligence technology within the EU. It includes a controversial amendment banning the use of facial recognition technology in public spaces. The draft legislation is still subject to change, as individual EU member states must agree before it becomes law, but the vote is a significant step in its progress.
The rules, proposed by the European Commission, cover any use of AI technology, but they had to be adapted to account for the implications of generative AI. The technology wasn’t widely considered when the law was first drafted, but the success of ChatGPT and the subsequent mass roll-out of other automation tools forced the EU to act.
Changes introduced by MEPs to the original commission draft act include some top-level regulation of general-purpose AI tools such as ChatGPT. Providers of these foundation models will be required to label AI-generated content and to disclose any copyrighted material used in training data.
“AI raises a lot of questions – socially, ethically, economically,” declared Thierry Breton, EU commissioner for the internal market. Speaking to reporters, he said: “Now is not the time to hit any ‘pause button’. On the contrary, it is about acting fast and taking responsibility.”
Other changes include a fine-tuned list of prohibited practices, extended to include subliminal techniques, biometric categorisation, predictive policing, and internet-scraped facial recognition databases. Emotion recognition techniques have also been banned from use in law enforcement.
MEPs will now enter into negotiations with the representatives of each European government and the European Commission – known as a trilogue – to flesh out the specifics. The first session is due to be held tonight, and negotiations will intensify from July, when Spain takes over the presidency of the Council of Ministers, having made the AI law a top priority.
EU AI Act: Legislation must keep pace with tech
Alex Hazell, head of privacy and legal for Europe at marketing tech vendor Acxiom, told Tech Monitor that rapid developments in AI make it challenging for regulators to keep up. “Generative AI is an excellent example of how technology has outpaced the law,” Hazell says. “However, in its rush to position itself as a global leader, the EU can’t forget the importance of protecting its people by ensuring its AI Act in its final form does not water down or contradict the privacy protections provided by the GDPR.”
“By the time the AI Act is finalised, we expect to see for instance combined data protection and artificial intelligence impact assessments which consider and address the harms and risks in the use of generative AI tools and similar technologies. The AI Act must work in tandem with existing regulation to create a stable framework that is optimised for everyone, and close collaboration between stakeholders is key to striking a balance between innovation and privacy.”
Edward Machin, a senior lawyer in the data, privacy and cybersecurity team at law firm Ropes & Gray, said that despite the hype around generative AI, the EU AI Act was always intended to cover a broad range of high-risk uses beyond chatbots. “The AI Act is shaping up to be the world’s strictest law on artificial intelligence and will be the benchmark against which other legislation is judged,” Machin says.
He adds: “It remains to be seen whether the UK will have second thoughts about its light-touch approach to regulation in the face of growing public concern around AI, but in any event the AI Act will continue to influence lawmakers in Europe and beyond for the foreseeable future.”