European lawmakers are being urged to “focus on high-risk” use cases of AI as they finalise the comprehensive EU AI Act. Industry body the Computer and Communications Industry Association (CCIA Europe), which represents the interests of companies including Amazon, Apple, Google and Meta, says legislators should avoid regulating the development of the technology itself, particularly foundation and frontier models like GPT-4 from OpenAI.
The EU AI Act is one of the most detailed pieces of legislation governing the use and development of artificial intelligence in the world. It covers all types of AI, including facial recognition, generative AI, and other forms of automation. It is entering the final phase of development as EU member nations debate its various elements and negotiate over what to include and what to leave out.
Countries around the world are debating ways to regulate the technology, particularly the more advanced next-generation models. The UK is reportedly pushing large AI labs such as OpenAI, Anthropic and Google’s DeepMind to give safety researchers and government agencies deep access to their models before they are put on the market. The US has taken a similar approach with its voluntary AI code.
The EU has taken a more prescriptive approach to regulating AI, focusing on high-risk uses, development, and deployment. It is this scope that CCIA Europe wants settled before the Act becomes law later this year. The trade association is urging negotiators to keep the focus on high-risk uses and not to stifle innovation through over-regulation.
CCIA Europe represents AI developers, deployers, and users. The organisations involved issued a joint statement expressing concern about positions taken by some of the politicians and member states involved in the final stage of the Act. The group said that while they support the Act’s overarching objectives around trust and innovation, they were concerned that some EU Parliament members want to move away from its core principles.
Duplication and lack of consistency
Its main concerns include overlapping rules, vague concepts, and a broad extension of the list of high-risk use cases and prohibited systems, which it suggests would create “unnecessary red tape and legal uncertainty.” It argues that a sensible regulatory framework promoting innovation is only achievable if the rules target high-risk applications alone.
“To provide Europe’s thriving AI ecosystem with the legal certainty it needs, the AI Act has to avoid any duplication of existing legal requirements, such as copyright rules,” the CCIA warns. “Open source developers, deployers, and users should also be supported, with the final AI act introducing workable rules and reducing red tape to a minimum.”
The current negotiations could be a “make-or-break” moment for the EU’s ambitions to create a forward-looking legislative framework, according to Boniface de Champris, CCIA Europe’s policy manager. “Today’s joint industry statement sends a clear message to EU lawmakers: if the final AI Act departs from its original risk-based approach, Europe risks stifling innovation,” De Champris said. “We call on the co-legislators to strike the right balance between regulating AI and fostering its development.”
The UK, meanwhile, is coming under pressure to broaden the scope of its upcoming AI Safety Summit to cover all uses of the technology, not just the most advanced models. AI research body the Ada Lovelace Institute has warned that other forms of artificial intelligence also pose risks and should be considered.