Artificial Intelligence Liability: Proposed EU Directive

If an AI system causes someone harm, whether intentionally or by a negligent act or omission, will the injured person be able to claim compensation for damages? The European Commission has proposed harmonised civil liability rules - the Artificial Intelligence Liability Directive (AILD) - to ensure that "persons harmed by artificial intelligence systems enjoy the same level of protection as persons harmed by other technologies."
This Directive lays down common rules on:
- the disclosure of evidence about high-risk AI systems, to enable a claimant to substantiate a non-contractual fault-based civil claim for damages; and
- the burden of proof for such claims brought before national courts for damage caused by an AI system.
Here we summarise why AILD is necessary, its key provisions, and its interaction with the EU's Artificial Intelligence Act (AI Act). Whilst AILD may not come into force for a number of years, followed by a two-year transition period, those who procure, design, deploy and use AI systems should take note: there are significant AI regulations on the near horizon with a clear direction of travel. The framework is relevant not just to EU users and providers, but also to those placing AI systems on the market or putting them into service in the EU, or where the output produced by AI systems is used in the EU.
AILD should be seen as part of the European Commission's broader work on new and emerging technologies. In its White Paper on Artificial Intelligence, the Commission undertook to promote the uptake of artificial intelligence and to address the risks associated with certain of its uses. The Commission proposed a legal framework for AI "which aims to address the risks generated by specific uses of AI through a set of rules focusing on the respect of fundamental rights and safety." The Commission also recognised the need to harmonise liability rules.
Liability is one of the top barriers to the use of AI by European companies, according to an EU survey.
Current national liability rules, in particular those based on fault, are not well suited to handling liability claims for damage caused by AI-enabled products and services. That is because victims of harm may need to prove a wrongful action or omission by a specific person who caused the damage. However, the nature of AI - including the complexity, autonomy and opacity of such systems - may make it difficult or impossible for victims to do so.
Further, there is a risk of divergence between Member States if there is no harmonised approach to liability for harm caused by AI systems. Such divergence has a cost: a lack of legal certainty for those using AI systems, and reduced trust in those systems. The EU's impact assessment estimates the additional market value of harmonising liability rules for AI systems at between approximately EUR 500 million and EUR 1.1 billion.
AILD applies to non-contractual civil liability claims and, potentially, to state liability.
AILD does not apply to criminal liability. Nor does it affect wider national rules of evidence and fault: as the Commission explains, 'Beyond the presumptions it establishes, [AILD] does not affect Union or national rules determining, for instance, which party has the burden of proof, what degree of certainty is required as regards the standard of proof, or how fault is defined.'
Member States may adopt national rules that are more favourable for claimants.
A claimant is a person bringing a claim for damages who:
- has been injured by an output of an AI system, or by the failure of such a system to produce an output where one should have been produced;
- has succeeded to, or been subrogated to, the right of an injured person; or
- is acting on behalf of one or more injured persons.
As the Commission explains, 'the aim is to give more possibilities to persons injured by an AI system to have their claims assessed by a court'.
A defendant is the person against whom a claim for damages is brought.
A court, the claimant and defendant will need evidence for any claim. Whether that evidence is available can be a significant barrier to successfully scoping and starting any claim for damages.
AILD requires that Member States ensure national courts are empowered to order disclosure of 'relevant evidence' about specific high-risk AI systems suspected of having caused damage.
To obtain disclosure the potential claimant must:
- have first asked the provider or user of the AI system to disclose the relevant evidence at its disposal, and been refused; and
- present facts and evidence sufficient to support the plausibility of the claim for damages.
National courts must also be empowered to order specific measures to preserve evidence.
The court's powers are limited to 'disclosure of evidence which is necessary and proportionate to support a potential claim or a claim for damages and the preservation to that which is necessary and proportionate to support such a claim for damages.'
In assessing proportionality, national courts must consider the legitimate interests of all parties, including any third parties concerned - in particular the protection of trade secrets and confidential information.
Failure to disclose or preserve evidence will mean that a court 'shall' presume the defendant's non-compliance with a relevant duty of care. The defendant can rebut that presumption.
In our view: The proposals are balanced and overall positive; by empowering national courts, AILD provides legal guidance to claimants that relevant evidence should be preserved and disclosed, and a mechanism by which that evidence can be obtained. Disclosure requirements will be balanced against legitimate interests of parties and in particular protection of trade secrets and confidentiality. However, there will still be practical issues for bringing a claim - what is 'relevant' evidence may be difficult to determine by a claimant, and it may be unclear which of the various stakeholders in an AI's lifecycle holds the relevant (potentially fragmented) evidence required to bring a claim.
National courts will presume the causal link between the fault of the defendant and the output produced by the AI system (or the AI system's failure to produce an output) where all of the following conditions are met:
1. the claimant has demonstrated (or the court has presumed) the fault of the defendant, consisting of non-compliance with a duty of care laid down in EU or national law directly intended to protect against the damage that occurred;
2. it can be considered reasonably likely, based on the circumstances of the case, that the fault has influenced the output produced by the AI system (or the failure of the AI system to produce an output); and
3. the claimant has demonstrated that the output produced by the AI system (or its failure to produce an output) gave rise to the damage.
How does the claimant demonstrate point 1 - that the defendant was at fault - for a high-risk AI system?
If damages are claimed from a provider, the claimant must demonstrate that any of the following requirements were not met (taking into account the steps undertaken in, and the results of, the risk management system pursuant to certain obligations under the AI Act), namely that the AI system:
- was developed using training, validation and testing data sets meeting the applicable quality criteria;
- was designed and developed to meet the applicable transparency requirements;
- was designed and developed to allow for effective human oversight;
- was designed and developed to achieve an appropriate level of accuracy, robustness and cybersecurity; or
- was immediately subject to the necessary corrective actions to bring it into conformity (or to withdraw or recall it, as appropriate).
If damages are claimed from a user, the claimant must demonstrate that the user:
- did not comply with its obligations to use or monitor the AI system in accordance with the accompanying instructions of use (or, where appropriate, to suspend or interrupt its use); or
- exposed the AI system to input data under its control which was not relevant in view of the system's intended purpose.
But even where point 1 is established, the presumption of a causal link will not apply where the defendant demonstrates that sufficient evidence and expertise is reasonably accessible for the claimant to prove the causal link themselves.
What if damages are claimed from a defendant who used the AI system in the course of a personal, non-professional activity? Then the presumption of a causal link will apply only where the defendant materially interfered with the conditions of the operation of the AI system, or where the defendant was required and able to determine the conditions of operation of the AI system and failed to do so.
In our view: again, these proposals are broadly positive. They provide greater legal certainty as to when, and how, the burden of proof is established. Yet again, there will be some practical difficulties for claimants, and much of the detail is not settled. For example, the AI Act is still the subject of debate and further amendment (as we have written about here), so what the obligations are under the AI Act which trigger the presumptions above, and how easy they are for a claimant to establish, remains to be seen.
AILD emphasises and complements the importance of the AI Act: the presumptions described above are triggered by non-compliance with obligations set out in the AI Act, so compliance with the AI Act will be central both to managing liability risk and to defending claims under AILD.
The Commission has proposed AILD, but it remains subject to debate, amendment and oversight by other EU institutions. The EU AI Act was proposed in April 2021 and, one and a half years later, it is still being debated. However, both have significant political support within the EU. How long it will be until the AI Act and AILD are in force is unknown, but it is likely to be measured in years rather than months or decades.
However, even once AILD is in force, Member States have another two years to bring into force the laws, regulations and administrative provisions necessary to comply with AILD.
If you would like to discuss how you procure, develop and deploy AI - including the liability issues and the regulation on the horizon - please contact Tom Whittaker or Brian Wong.