September 29, 2022 (updated 27 October 2022, 1:37pm)

The EU wants to make it easier to sue over harms caused by AI

A new directive makes linking AI systems to harmful outcomes easier. But where liability will ultimately sit is unclear.

By Ryan Morrison

People and companies that are harmed in any way by a drone, robot or software driven by artificial intelligence will be able to sue for compensation under new EU rules. While this offers extra protection to citizens, questions remain over where the buck stops in the supply chain.

Broken drones and faulty software that lead to personal harm fall under the new AI Liability Directive. (Photo by Chetty Thomas/Shutterstock)

The AI Liability Directive brings together a patchwork of national rules from across all 27 member countries and is designed to “ensure that victims benefit from the same standards of protection when harmed by AI products or services, as they would if harm was caused under any other circumstances”.

Victims will be able to sue the developers, providers and users of AI technology for compensation if they suffer harm to their life, property, health or privacy due to a fault or omission caused by AI. They can also sue if they are discriminated against during a recruitment process that used AI. However, it is unclear where overall responsibility will lie under the current draft of the directive.

Guillaume Couneson, partner at law firm Linklaters, told Tech Monitor the directive “does not indicate against whom the victim of damages caused by an AI system should file its claim. It envisages that the defendant could be the provider or the user of the AI system.

“So let’s say a recruitment company uses an AI made by a third party to filter CVs and it automatically dismisses people from minority backgrounds. Would the developer or the recruitment company be at fault? The principle remains that the party which committed the fault is liable.”
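To make the recruitment example concrete, a claimant or auditor might start from a simple selection-rate comparison like the one sketched below. The numbers are hypothetical, and the 0.8 threshold is the “four-fifths rule” used in US employment-discrimination practice, not a test set out in the directive.

```python
# A minimal sketch of a disparate-impact check on a CV-screening model's
# decisions. The data and the 0.8 threshold are illustrative assumptions.

def selection_rate(outcomes: list[bool]) -> float:
    """Fraction of applicants in a group whose CVs passed the AI filter."""
    return sum(outcomes) / len(outcomes)

def disparate_impact_ratio(group_a: list[bool], group_b: list[bool]) -> float:
    """Ratio of the lower selection rate to the higher one."""
    low, high = sorted([selection_rate(group_a), selection_rate(group_b)])
    return low / high

# Hypothetical outcomes: True means the CV passed the automated filter.
majority_group = [True] * 80 + [False] * 20   # 80% selected
minority_group = [True] * 40 + [False] * 60   # 40% selected

ratio = disparate_impact_ratio(majority_group, minority_group)
print(f"Disparate-impact ratio: {ratio:.2f}")  # 0.50
if ratio < 0.8:
    print("Selection rates differ enough to suggest possible discrimination.")
```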

The directive does not institute a no-fault liability regime, though Couneson suggests one may come in the future. Rather, it aims to help the victim of an AI-caused harm provide evidence in court. “It does so via two mechanisms, namely an obligation for the defendant to disclose evidence in certain circumstances and the presumption of a causal link between the fault of the defendant and the (failed) output of the AI system.”

EU artificial intelligence laws: the presumption of causality

This is done via the introduction of a “presumption of causality”, which means victims only have to show that a failure to comply with certain requirements led to the harm, and then link that failure to the AI system.


The directive covers tangible and intangible unsafe products, including software that is standalone or embedded, and the digital services needed to make a product work.

BSA | The Software Alliance, the industry association, broadly supports efforts to harmonise AI rules across Europe but warns of the need for greater clarity over who bears responsibility if AI goes wrong, particularly whether it should fall on a developer, deployer or operator.

“EU policymakers should clarify the allocation of responsibility along the AI value chain to make sure that responsibilities for compliance and liability are assigned to the entities best placed to mitigate harms and risks,” said Matteo Quattrocchi, BSA policy director for Europe.

“The goals of AI governance should be to promote innovation and enhance public trust. Earning trust will depend on ensuring that existing legal protections continue to apply to the use of artificial intelligence.”

Bart Willemsen, analyst at Gartner, told Tech Monitor the new directive “puts victims of negative impact through AI-based decision making in a stronger position than when things are left in a ‘computer says no’ type of world, something we all very much should want to prevent”.

He said it also ties in with the new European Union AI Act, which is extremely broad in scope: it addresses anyone putting an AI system onto the market, anyone using such a system within the EU, and any company outside the EU producing systems that will be used or deployed in the EU.

How tech leaders should approach the new AI rules

The impact of AI can range from minimal to damaging if it is managed incorrectly, so the EU has updated its legislation to make it easier to take action when AI is mismanaged. The UK has similar proposals under its new AI framework, which takes a ‘risk-based’ approach to regulation.

Willemsen says there are high-profile cases in which AI and algorithms have had a dangerous impact on certain groups, citing the effect of Instagram and TikTok on the mental health of young teenagers, and the influence of data harvesting by companies such as Cambridge Analytica on the political agenda.

“The liability clauses are therefore in line with the prohibitions with which the AI Regulation starts off,” he said. “The point of the AI Liability Directive here is to empower victims of things like the above and of similar negative effects from AI usage, and to simplify the legal process.”

He explained that the “presumption of causality” is particularly important because it means laypeople will not have to go into deep technical detail about an AI model to prove it was involved in the harm. Equally important, he said, is the ability for victims to demand information from companies.

To prepare for the introduction of the new AI liability rules, Willemsen warned, companies must define organisational roles and responsibilities for managing AI trust, risk and security, including privacy protection.

It is also important to “document the intentions of each AI model, including its function in the ecosystem of deployment, desired bias controls, and optimal business outcomes”.
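In practice, such documentation could be as simple as a structured record kept alongside each deployed model. The sketch below is one illustrative way to capture Willemsen’s three points in code; the field names and example values are assumptions, not a format prescribed by the directive or by Gartner.

```python
# A minimal sketch of per-model documentation along the lines Willemsen
# describes. Field names and example values are illustrative assumptions.
from dataclasses import dataclass

@dataclass
class ModelRecord:
    name: str
    intended_function: str        # the model's function in its ecosystem of deployment
    bias_controls: list[str]      # desired bias controls
    business_outcomes: list[str]  # optimal business outcomes

cv_screener = ModelRecord(
    name="cv-screening-v2",
    intended_function="Rank incoming CVs for recruiter review; no automatic rejections",
    bias_controls=[
        "protected attributes excluded from input features",
        "quarterly disparate-impact audit",
    ],
    business_outcomes=["shorter time-to-shortlist", "diverse candidate pool maintained"],
)
print(cv_screener)
```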

Finally, he warned that companies should avoid deploying AI models on individuals where this can be prevented: when using digital twin technology, for instance, it is enough to address a “persona rather than a person”. Companies should also always hold their activities and technologies to “sufficient moral and societal standards”.
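As a rough illustration of the “persona rather than a person” idea, the sketch below aggregates a segment of individual profiles into a single synthetic persona before any model sees the data. The fields and the simple averaging are assumptions made for the example, not a method drawn from the article or from Gartner.

```python
# A rough sketch of addressing a "persona rather than a person":
# individual records are collapsed into one aggregate profile, and any
# downstream AI model is applied to that persona, not to any individual.
from statistics import mean

def build_persona(users: list[dict]) -> dict:
    """Collapse a segment of individual profiles into one persona profile."""
    return {
        "segment_size": len(users),
        "avg_age": mean(u["age"] for u in users),
        "avg_sessions_per_week": mean(u["sessions_per_week"] for u in users),
    }

segment = [
    {"age": 24, "sessions_per_week": 5},
    {"age": 31, "sessions_per_week": 3},
    {"age": 28, "sessions_per_week": 7},
]
persona = build_persona(segment)
print(persona)  # the model sees only this aggregate, never a single user
```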

Overall, the new rules are designed to modernise and reinforce the liability rules already in place for manufacturers, extending them to cover automation and artificial intelligence.

Commissioner for the Internal Market Thierry Breton said in a statement: “The new rules will reflect global value chains, foster innovation and consumer trust, and provide stronger legal certainty for businesses involved in the green and digital transition.”

Read more: UK government sets out AI regulation plans
