The EU’s Artificial Intelligence Act (AIA) is the world’s first attempt to regulate the emerging technology, but the proposed regulation has a long way to go before becoming law. In the coming months, it will be debated in the European Parliament in what is expected to be an intense and heavily lobbied process. Tech Monitor canvassed the views of industry bodies and MEPs to identify the most contentious issues in the debate.
Debating definitions: The technology industry’s response to the AI Act
The European Commission’s open consultation on the AI Act attracted 304 feedback submissions – far more than other tech bills. Tech companies and industry bodies put forward the positions they will likely be lobbying hard for over the coming months. On the whole, they demand greater clarity on aspects of the AIA, including the definition of AI, the classification of ‘high-risk’ AI applications – which will be subject to stricter regulation and, in some cases, outright bans – and some of the proposed AI ‘harms’.
TechUK, a body representing the UK tech industry, wrote in its feedback submission that the current definition of AI is too broad and “goes beyond what would normally be considered as ‘intelligent’”, saying the legislation appears to cover statistical software and even general computer programs. Huawei and IBM echoed these concerns, requesting a narrower definition of ‘AI systems’ in their own submissions.
The AIA’s current definition of AI covers machine learning approaches (including reinforcement learning and deep learning); logic and knowledge-based approaches (including inference and deductive engines); and statistical approaches, Bayesian estimation, and search and optimisation methods. It defines an AI system as software developed with any of these techniques that can, “for a given set of human-defined objectives, generate outputs, such as content, predictions, recommendations, or decisions influencing the environments they interact with”.
Concerns about the regulation’s definition of AI aren’t unique to the tech industry. In April, when the draft law was revealed, lawyers and academics told Tech Monitor that the current definitions were too technical and, therefore, likely to become quickly outdated. They advocated a definition based on properties or effects, rather than techniques, closer to those drawn up by the EU’s High-Level Expert Group (HLEG) on AI and by the OECD.
TechUK also takes issue with the classification of high-risk AI systems. High-risk systems listed in the AIA include biometric identification, management of critical infrastructure, and AI used for law enforcement or border control. TechUK argues that the definition of ‘high-risk’ is “overly broad and would encompass AI applications that are not intended to be covered by the Regulation”, and instead proposes designating as high-risk only those systems with a high probability of harming a user.
Tech companies also take umbrage at the AIA’s prohibition of “subliminal techniques beyond a person’s consciousness in order to materially distort behaviour”. Companies that use machine learning to serve targeted ads, such as Facebook, are particularly keen to have this phrase clarified, and TechUK calls for clarity on both what it means and what legal standards would be used to assess it.
SME concerns about standards and compliance costs
Small and medium-sized enterprises (SMEs) have a different set of concerns from bigger tech companies. “We usually like it when the Commission proposes a single piece of legislation for the digital market because, for SMEs, it’s much more complicated to navigate different legal environments in different member states,” says Annika Linck, senior policy manager at the European Digital SME Alliance, a body that represents SMEs in the ICT sector in Europe. But the body is critical of the AIA “because there is a large reliance on standards coming out of standardisation organisations, which usually have very low SME representation”.
“There is a large reliance on standards coming out of standardisation organisations, which usually have very low SME representation.”
Annika Linck, European Digital SME Alliance
SMEs tend not to have time to participate in technical committees, says Linck, meaning that these standards are likely to be shaped by large companies. The concern is that SMEs will struggle to pass the conformity assessments based on these standards. If the EU opts for a standards-led approach, it should ensure SME representation on the standards bodies, perhaps even by paying them to participate, suggests Linck.
Compliance costs are another worry for SMEs. The EU currently estimates that fulfilling a conformity assessment will cost companies around €7,000. “However, when talking to SMEs, this doesn’t seem very realistic in practice,” says Linck, because the figure covers only the auditing costs. There would also be HR costs for the staff who must prepare for the assessment internally, she says, as well as the possible expense of hiring external consultants.
MEPs’ response to the AI Act
Attitudes among MEPs, meanwhile, range from concerns about stifling innovation to fears that risks to human rights will be left unchecked. “AI will determine whether we remain competitive in the digital sphere,” says Axel Voss, a German member of the centre-right European People’s Party in the European Parliament. “The legislation should cover high-level ethical standards combined with appropriate liability rules while leaving enough space for innovation, especially for SMEs. It will be crucial to find the right balance between privacy, security and innovation and avoid over-regulating the AI market.”
“It will be crucial to find the right balance between privacy, security and innovation and avoid over-regulating the AI market.”
Axel Voss, European People’s Party
Dragoș Tudorache, head of the European Parliament’s AI committee, has also emphasised the importance of ensuring that the new regulatory regime doesn’t stifle AI innovation.
Other groups fear that the AI Act places the interests of businesses above those of citizens. The draft regulation is “company-centric”, not “human-centric”, says Alexandra Geese, a German member of the Greens group in the European Parliament. “The fundamental rights concerns, and the concerns about what artificial intelligence technologies could do to our society, and especially to [more vulnerable] population groups, are not addressed [by the AIA].”
Geese is concerned that Big Tech’s lobbying machine will exert undue influence on the outcome of the parliamentary deliberations. A recent report from Corporate Europe and LobbyControl found that of the 271 meetings European Commission officials held on the Digital Services Act – another piece of blockbuster digital regulation – 75% were with industry lobbyists. Google and Facebook led the pack.
An analysis by the New Statesman found a similar pattern at the European Parliament level, and some MEPs and digital rights groups fear that tech giants might exert the most influence over the debate on the AIA too.
The battle over facial recognition
Facial recognition is shaping up to be one of the most contentious battlegrounds in the parliamentary process. “Biometric identification fits into the classical digital rights and privacy debate, and there’s already campaigning going on,” says Geese. “But I think it will be difficult because you have the ministers of interior who are adamant about wanting it. So that’s going to be an interesting debate.”
The AIA currently prohibits “real-time remote biometric identification in publicly accessible spaces for the purpose of law enforcement unless certain limited exceptions apply”, but European Digital Rights (EDRi) argues this “addresses only a small range of the many practices that can constitute biometric mass surveillance”.
EDRi puts forward proposals for much stronger bans, including a prohibition on the use of remote biometric identification in publicly accessible spaces for any purpose, and a general ban on any use of AI for automated recognition of human features in such spaces.
EDRi also argues for a ban on AI used in law enforcement or criminal justice that purports to predict future behaviour, on uses of AI in migration control that undermine the right to claim asylum, and on a range of other applications currently deemed ‘high-risk’ but not prohibited, arguing these are “incompatible with fundamental rights and democracy”. Rights group Access Now argues that emotion recognition systems, such as those used by US Immigration and Customs Enforcement, should also be banned outright.
“I demand that legal backdoors for mass surveillance through biometric recognition software… are closed.”
Svenja Hahn, Free Democratic Party
These are the battles where tensions over fundamental rights will come to the fore. “I demand that legal backdoors for mass surveillance through biometric recognition software, that are currently in the Commission’s AI Act proposal, are closed,” says Svenja Hahn, a German member of the Free Democratic Party in the European Parliament. “We have to safeguard our citizens’ right to anonymity in public.”
MEPs on the other side of the debate object to the prohibition of entire classes of AI. “While we should regulate high-risk AI systems, disproportionate requirements for AI products and services would weaken research and innovation, our European growth potential and our international competitiveness,” says Voss. “Instead of bans for entire aspects of AI, such as facial recognition, although such technologies also [provide] benefits for our security, we should stick to the principle of regulating ‘as much as necessary, as little as possible’.”