
EU warned AI Act could harm the industry

Industry leaders have called on the EU to tone down regulation of general purpose AI and allow industry to monitor its deployment.

By Ryan Morrison

Rules around the use of data in training large language models (LLMs) and generative AI in the new draft EU AI Act would jeopardise Europe’s competitiveness, a group of industry leaders has warned. An open letter signed by executives from 160 companies, including Meta, Renault and Siemens, calls on lawmakers to think again about the draft legislation.

The open letter calls on lawmakers to reconsider plans for tough regulation of generative AI tools like ChatGPT or Stable Diffusion. (Photo by rafapress / Shutterstock)

The EU AI Act is poised to become the first comprehensive artificial intelligence legislation in the world, but late-stage additions around the training, use and governance of general purpose AI have proved controversial. The act takes a largely risk-based approach to AI regulation, putting an emphasis on use case rather than development, but when it comes to tools like ChatGPT there are stricter requirements on the developers.

Draft rules governing general purpose AI include a requirement to disclose AI-generated content and provide a method to distinguish deepfake images from real images. Most of the measures are around transparency and data protection rights. Models would also have to be designed to prevent them from generating illegal content and there would be a requirement to publish summaries of any copyrighted data used in training.

These are among the changes that prompted some AI industry leaders to write the open letter. In it they warn that under the rules as currently drafted, generative AI would become too heavily regulated, leaving companies operating in the EU facing high compliance costs and “disproportionate liability risks”.

It isn’t just the AI labs signing the open letter. Executives from Germany’s Siemens and France’s Airbus have also called the rules harmful and anti-competitive. They argue that AI offers Europe “the chance to re-join the technological avant-garde” but that regulation would stifle the opportunity.

“In our assessment, the draft legislation would jeopardise Europe’s competitiveness and technological sovereignty without effectively tackling the challenges we are and will be facing,” the group of executives, which also includes Meta’s Yann LeCun, wrote.

EU AI Act should take a ‘risk-based approach’ to regulation

Companies have said they could be forced to leave the EU if regulation becomes too burdensome. The letter calls for less stringent regulations, retaining a “risk-based approach” and an industry body to monitor the implementation of legislation rather than lawmakers.

It runs counter to an earlier letter signed by Elon Musk and OpenAI’s Sam Altman urging a “pause” on development of major new AI models until regulation catches up, although it has been reported that at the time that letter was published, OpenAI was lobbying the EU to exclude its LLM, GPT-4, from being classed as “high-risk”.

There is a global race to attract AI talent and companies. While the companies do see a need for regulation, in part because it would help ease enterprise user concerns, they are pushing for regulation to target end use rather than development. Companies such as OpenAI are also focused on driving regulation of future advanced systems, known as artificial general intelligence, rather than current tools.

The UK is taking a more light-touch approach to AI regulation, although there have been signs that this could be about to change. At present, the focus of the Rishi Sunak government appears to be on AI safety and guardrails rather than direct regulation built into legislation. It does seem to be paying off, with OpenAI becoming the latest major AI lab to open an office in the UK, following Anthropic and Google DeepMind. Enterprise AI platform Synthesia and Stable Diffusion co-creator Stability AI are also based in the UK.

Read more: Digital exclusion undermines UK’s AI ambitions – Lords report
