September 29, 2023

Big Tech calls for EU to ‘focus on high risk’ use cases in AI Act

Companies building AI systems have called for a less prescriptive approach to regulating the technology.

By Ryan Morrison

European lawmakers are being urged to “focus on high-risk” use cases of AI as they finalise the comprehensive EU AI Act. Industry body the Computer and Communications Industry Association (CCIA Europe), which represents the interests of companies including Amazon, Apple, Google and Meta, says legislators should avoid regulating the development of the technology itself, particularly foundation and frontier models such as OpenAI’s GPT-4.

The EU AI Act seeks to govern and regulate all aspects of AI, from generative tools like ChatGPT to surveillance technology and automation. (Photo by Ascannio/Shutterstock)

The EU AI Act is one of the most detailed pieces of legislation governing the use and development of artificial intelligence anywhere in the world. It covers all types of AI, from facial recognition and generative systems to other forms of automation. It is entering its final phase of development as EU member states debate its various elements and negotiate over what to include and what to leave out.

Countries around the world are debating how to regulate the technology, particularly the more advanced next-generation models. The UK is reportedly pushing large AI labs such as OpenAI, Anthropic and Google’s DeepMind to give safety researchers and government agencies deep access to their models before they are put on the market. The US has taken a similar approach with its voluntary AI code.

The EU has taken a more prescriptive approach to regulating AI, covering high-risk uses as well as development and deployment. This is the scope CCIA Europe wants narrowed before the act becomes law later this year: the trade association is urging negotiators to keep the focus on high-risk uses and not to stifle innovation through over-regulation.

CCIA Europe represents AI developers, deployers and users. The organisations involved issued a joint statement expressing concern at positions taken by some of the politicians and member states involved in the final stage of negotiations on the act. The group said that while it supports the act’s overarching objectives around trust and innovation, it is concerned that some members of the European Parliament want to move away from its core principles.

Duplication and lack of consistency

Its main concerns include overlapping rules, vague concepts, and a broad extension of the list of high-risk use cases and prohibited systems, which it suggests would create “unnecessary red tape and legal uncertainty”. It argues that a sensible regulatory framework that promotes innovation is only achievable if the rules target high-risk applications alone.

“To provide Europe’s thriving AI ecosystem with the legal certainty it needs, the AI Act has to avoid any duplication of existing legal requirements, such as copyright rules,” the CCIA warns. “Open source developers, deployers, and users should also be supported, with the final AI Act introducing workable rules and reducing red tape to a minimum.”


The current negotiations could be a “make-or-break” moment for the EU’s ambitions to create a forward-looking legislative framework, according to Boniface de Champris, CCIA Europe’s policy manager. “Today’s joint industry statement sends a clear message to EU lawmakers: if the final AI Act departs from its original risk-based approach, Europe risks stifling innovation,” de Champris said. “We call on the co-legislators to strike the right balance between regulating AI and fostering its development.”

The UK meanwhile is coming under pressure to broaden the scope of its upcoming AI Safety Summit to include all uses of the technology and not just the most advanced models. AI research body the Ada Lovelace Institute has warned that other forms of artificial intelligence also pose risks and should be considered.

Read more: AI safety summit: government urged to look beyond frontier models
