Rules in the draft EU AI Act around the use of data in training large language models (LLMs) and generative AI would jeopardise Europe’s competitiveness, a group of industry leaders has warned. An open letter signed by executives from 160 companies, including Meta, Renault and Siemens, calls on lawmakers to think again about the draft legislation.
The EU AI Act is poised to become the first comprehensive artificial intelligence legislation in the world, but late-stage additions around the training, use and governance of general purpose AI have proved controversial. The act takes a largely risk-based approach to AI regulation, putting the emphasis on use case rather than development, but when it comes to tools like ChatGPT it places stricter requirements on the developers themselves.
Draft rules governing general purpose AI include a requirement to disclose AI-generated content and provide a method to distinguish deepfake images from real images. Most of the measures are around transparency and data protection rights. Models would also have to be designed to prevent them from generating illegal content and there would be a requirement to publish summaries of any copyrighted data used in training.
These are among the changes that led some AI industry leaders to write the open letter. In it they warn that under the rules as drafted, generative AI would become too heavily regulated, leaving companies operating in the EU facing high compliance costs and “disproportionate liability risks”.
It isn’t just the AI labs signing the open letter. Executives from Germany’s Siemens and France’s Airbus have also called the rules harmful and anti-competitive. They argue that AI offers Europe “the chance to re-join the technological avant-garde” but that regulation would stifle the opportunity.
“In our assessment, the draft legislation would jeopardise Europe’s competitiveness and technological sovereignty without effectively tackling the challenges we are and will be facing,” the group of executives, which also includes Meta’s Yann LeCun, wrote.
EU AI Act should take a ‘risk-based approach’ to regulation
Companies have said they could be forced to leave the EU if regulation becomes too burdensome. The letter calls for less stringent rules, the retention of a “risk-based approach” and for an industry body, rather than lawmakers, to monitor the implementation of the legislation.
It runs counter to an earlier letter, signed by Elon Musk and OpenAI’s Sam Altman, urging a “pause” on development of major new AI models until regulation catches up. It has been reported, however, that at the time that letter came out OpenAI was lobbying the EU to exclude its LLM, GPT-4, from being classed as “high-risk”.
There is a global race to attract AI talent and companies. While the companies do see a need for regulation, in part because it would help ease enterprise user concerns, they are pushing for regulation to target end use rather than development. Companies such as OpenAI are also focused on driving regulation of future advanced systems, known as artificial general intelligence, rather than current tools.
The UK is taking a more light-touch approach to AI regulation, although there have been signs that this could be about to change. At present, the focus from the Rishi Sunak government appears to be on AI safety and guardrails rather than direct regulation built into legislation. The approach does seem to be paying off, with OpenAI becoming the latest major AI lab to open an office in the UK, following Anthropic and Google DeepMind. Enterprise AI platform Synthesia and Stable Diffusion co-creator Stability AI are also based in the UK.