A leaked version of the EU’s forthcoming AI regulation has provided a glimpse of the bloc’s plans to tackle the governance of algorithms. While the draft legislation has been hailed by some as ambitious, others criticise it as disappointingly vague in key areas and say it doesn’t go far enough.
The leaked version of the paper surfaced on Wednesday and dates back to January. Some of its proposals are bold, outlining plans to ban entire classes of ‘high-risk’ AI applications, including AI used for indiscriminate surveillance and the manipulation of behaviour. Other high-risk applications are set to be subject to an intense regulatory regime that includes rigorous oversight and review mechanisms.
There are echoes of GDPR in the new regulation: regulators will be able to fine non-compliant companies up to €20m, or 4% of their worldwide turnover. The EU also plans to create a new “European Artificial Intelligence Board” to oversee the regime – despite current criticisms of the equivalent oversight mechanism for GDPR.
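The draft does not spell out how the two ceilings interact, but if it follows the GDPR convention that the higher of the two figures applies, the exposure for a large company is easy to sketch (a purely illustrative calculation; the ‘whichever is higher’ rule is an assumption carried over from GDPR, and the draft’s exact wording may differ):

```python
def max_fine_eur(worldwide_turnover_eur: float) -> float:
    """Upper bound of the penalty in the leaked draft: EUR 20m or 4% of
    worldwide turnover. Assumes the GDPR convention that whichever
    figure is higher applies -- the draft's exact wording may differ."""
    return max(20_000_000, 0.04 * worldwide_turnover_eur)

# A company with EUR 2bn in annual turnover could face up to EUR 80m;
# below EUR 500m in turnover, the flat EUR 20m ceiling dominates.
print(max_fine_eur(2_000_000_000))  # 80000000.0
print(max_fine_eur(100_000_000))    # 20000000
```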
“If the leaked document really is close to the Commission’s proposals expected on 21 April then this will be as important as the GDPR,” says Robin Allen QC, speaking on behalf of the AI Law Consultancy. “For detail and reach it beats anything so far proposed in US legislation. It will change almost everything about the way in which engineers, business, and users, commission, design and deploy AI systems.”
The EU draft proposal contains the following:
- High-risk AI systems, including those used to manipulate human behaviour, conduct social scoring or carry out indiscriminate surveillance, will be banned in the EU – although exemptions for use by the state or state contractors could apply.
- Special authorisation from authorities will be required for “remote biometric identification systems” such as facial recognition in public spaces.
- “High-risk” AI applications will require inspection before deployment to make sure systems are trained on unbiased data sets and operate under human oversight. These include applications that pose a safety threat, such as self-driving cars, and those that could affect someone’s livelihood, such as hiring algorithms (a sketch of the kind of data audit this might involve follows this list).
- People have to be told when they’re interacting with an AI system, unless this is “obvious from the circumstances and the context of use”.
- A publicly accessible database will hold data on “high-risk” AI systems. Public sector systems would be exempt.
- A post-market monitoring plan will evaluate the continuous compliance of AI systems with the requirements of the regulation.
- These rules apply to EU companies and those that operate in the EU or impact EU citizens.
- Some companies will be permitted to carry out self-assessments, but others will be subject to third-party checks.
- A “European Artificial Intelligence Board” will be created, comprising representatives from every member state, to help the Commission define “high-risk” AI systems.
- AI in the military is exempt from the regulation.
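The draft leaves open what an inspection for “unbiased data sets” would actually test. As a purely illustrative sketch of the kind of check a pre-deployment audit of a hiring algorithm’s training data might run (the attribute names, records and four-fifths threshold below are hypothetical choices, not anything prescribed by the draft):

```python
from collections import defaultdict

def selection_rates(records, protected_attr, outcome_attr):
    """Positive-outcome rate per group of a protected attribute."""
    totals, positives = defaultdict(int), defaultdict(int)
    for record in records:
        group = record[protected_attr]
        totals[group] += 1
        positives[group] += int(record[outcome_attr])
    return {group: positives[group] / totals[group] for group in totals}

def passes_four_fifths_rule(rates):
    """One common fairness heuristic (borrowed from US employment law):
    the lowest group's selection rate should be at least 80% of the
    highest group's. Used here only as an example criterion."""
    return min(rates.values()) >= 0.8 * max(rates.values())

# Hypothetical training records for a hiring model:
data = [
    {"gender": "f", "hired": True},  {"gender": "f", "hired": False},
    {"gender": "m", "hired": True},  {"gender": "m", "hired": True},
]
rates = selection_rates(data, "gender", "hired")
print(rates)                           # {'f': 0.5, 'm': 1.0}
print(passes_four_fifths_rule(rates))  # False: this data set would fail
```

Real conformity checks would of course be far broader, covering the provenance, representativeness and documentation of the data, as well as the human-oversight arrangements the draft requires.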
EU AI regulation: human rights groups have mixed feelings
Sarah Chander, senior policy adviser at EDRi (European Digital Rights), says that the digital rights group has been pushing for the outright ban of AI that violates fundamental rights and is pleased to see the EU’s acknowledgement that some classes of it should not exist.
“However, the Commission provides a carve-out on public security uses, which are therefore not really bans,” says Chander. EDRi would like to see predictive policing, all biometric mass-surveillance practices, automated recognition of sensitive traits such as gender identity, race and disability, and uses of AI at the border and in asylum cases prohibited too.
“By categorising these uses as ‘high risk’, subject only to self-assessment by providers – except biometric recognition, which does undergo a third-party conformity check – the proposal risks giving a green light to some of the most harmful use cases of AI,” says Chander.
Alexandra Geese, a German member of the Greens group in the European Parliament, says that the draft legislation is a good start, “but not strong enough in crucial points”. “It is a slap in the face of civil society that automatic facial recognition systems in public spaces are not abolished, although many citizens, MPs and thousands of petition signatories are pushing for this,” she says.
For high-risk AI systems that are not banned outright, the legislation proposes that they be trained on high-quality data sets, be transparent, remain subject to human oversight, and be ‘robust’ and accurate. All of this is good, says Daniel Leufer, policy adviser at rights group Access Now, but could end up legitimising technologies that are based on a flawed premise.
He cites the example of systems that infer gender from facial structure, which could negatively impact non-binary and transgender people. “There are, and will be, many cases where the intended aim of a system is fundamentally problematic from a scientific or human rights perspective,” he says. “We, therefore, need to ensure that the intention behind systems, and how they propose to accomplish their aims, is in conformity with the highest human rights and scientific standards.”
AI by another name?
Legal experts have identified an even more fundamental flaw with the new EU AI regulation – the definition of AI itself. The proposed legislation defines AI in the following way:
(1) ‘artificial intelligence system or AI system’ means software that is developed with one or more of the approaches and techniques listed in Annex I and can, for a given set of human-defined objectives, generate outputs such as content, predictions, recommendations, or decisions influencing real or virtual environments. AI systems are designed to operate with varying levels of autonomy. An AI system can be used as a component of a product, also when not embedded therein, or on a stand-alone basis and its outputs may serve to partially or fully automate certain activities, including the provision of a service, the management of a process, the making of a decision or the taking of an action
Tomasz Zalewski, a partner at law firm Bird & Bird, says the long-winded definition “does not take into account the results of the work of many experts, including EU experts”. This includes, for example, the definitions devised by the High-Level Expert Group on AI convened by the European Commission, as well as by the OECD.
“These definitions are synthetic and resistant to technological change,” says Zalewski, because of a consensus that the definition of AI shouldn’t be based on the description of technologies “in order to avoid the need of constant updating when the technology is upgraded or changed”.
This, Zalewski argues, is a flaw in the way the draft defines AI: “It seems that the definition proposed in the draft is a definition used not by lawyers but by marketing people.”
Virginia Dignum, professor of AI at Umeå University, also critiqued the vague definition of AI in the document, saying it “[opens] the door to circumvent the prohibition”. “I would rather see the regulation focusing on properties or possible results, rather than on techniques,” she told Tech Monitor.
Some have even speculated that companies might try to exploit this imprecision to dodge regulation, rebranding their systems with terms such as ‘big data’ or ‘data science’.
Imprecision and subjectivity
The imprecision of the draft legislation goes further than the definition of AI. Leufer says the report “contains some vague proposals which could lead to problematic loopholes that could undermine the protection of fundamental rights”.
One area that has raised eyebrows is the part of the report which reads: “AI systems designed or used in a manner that exploits information or prediction about a person or group of persons in order to target their vulnerabilities or special circumstances, causing a person to behave, form an opinion or take a decision to their detriment.”
Some have pointed out that this sounds a lot like Facebook’s content-surfacing algorithms, or even Amazon’s product recommendation AI. “How would we determine what is to somebody’s detriment? And who does this assessment?” asks Leufer.
Sandboxing for innovation
Some are concerned that the regulation won’t do enough to stimulate innovation.
“Importantly, the Regulation does little to incentivise and support EU innovation and entrepreneurship in this space,” says Omer Tene, vice president of the nonprofit International Association of Privacy Professionals (IAPP). “The idea of competing with AI leaders such as the US and China through regulatory innovation is untested. To be sure, the EU must protect fundamental rights; but it should also ensure competitiveness of EU innovators in the global market.”
Under “measures in support of innovation”, the report proposes AI regulatory sandboxing schemes. These may be established by member states and/or the European data protection supervisor “to facilitate the development and testing of innovative AI systems under strict regulatory oversight before AI systems are placed on the market or otherwise put into service”.
To reduce the regulatory burden on SMEs, smaller companies will get first access to these sandboxes, benefit from “awareness-raising activities” about the regulation, and receive guidance from authorities. They’ll also get priority access to “digital hubs and testing experimentation facilities”, which will support businesses in making sure they’re compliant with the new regulation.
Despite this, there are concerns that the regulation, like GDPR, might impact SMEs the most. “It is unacceptable that applications of SMEs are scrutinised in detail, while the large platforms might go unscathed because national authorities refuse to act,” says Geese.
UCL digital rights and regulation professor Michael Veale predicts: “There’ll be a big push to make this a scale-based regime of applicability as these aren’t many concessions.”
But despite the imperfections of the draft legislation, many hope that the finalised version will address these issues. “Leaking is a ‘political tool’,” says Dignum. “A reason to leak is to gauge the reactions to it.”
Leufer says that while “the proposals for prohibitions and high-risk AI in this leaked version fall short of demands made by civil society in a number of areas, they do provide a basis upon which to work”.
The finalised legislation could well look very different. “The European Parliament and the EU member states still have to agree on amendments and, eventually, on a final text,” points out Dr Frederik Zuiderveen Borgesius, professor of ICT and law at Radboud University. “We can expect much political discussion and much lobbying. The process will take two years at least.”