Background
Household appliances, vehicles, medical equipment, drones and other products increasingly use AI and, in particular, machine learning and natural language processing (NLP) technologies to automate decision-making. The increasing degree of autonomy facilitated by AI has many advantages but also gives rise to new and uncertain risks. In particular, what happens when AI is deployed and the product causes injury or loss?
The specific characteristics of these technologies and their applications – including complexity, modification through updates or self-learning during operation, and limited predictability – may make it more difficult to determine what went wrong and, when something does go wrong, who should bear liability. Determining who should be liable can be problematic, as there are often many parties involved in an AI system (data provider, designer, manufacturer, programmer, developer, user and the AI system itself). Further complications may arise if the fault or defect stems from decisions the AI system has made itself based on machine learning principles, with limited or no human intervention.
The European Commission has published its latest report on this matter, which examines whether the existing liability regimes in the EU are sufficient for the purposes of apportioning liability in relation to AI and emerging technologies. The report goes on to suggest ways of addressing some of the challenges posed by such technologies to ensure a fair and efficient allocation of liability.
Potential for harm?
The complexities arising from AI and, in particular, machine learning may lead to damage, loss or injury in a number of different contexts. For example:
- IPR ownership challenges – AI’s ability to create works that would otherwise be recognised as IP created by a human raises questions as to who owns such IP and, moreover, who is liable when such works infringe another party’s IPR.
- Privacy – whilst smart home devices such as Amazon’s Alexa are meant to make life easier, these devices also collect a huge amount of data (including personal data) which, if hacked or compromised, could lead to a rise in claims under data privacy laws.
- Economic losses – businesses are increasingly using AI to make business decisions. For example, in the financial services industry, AI is deployed to analyse contracts and make investment decisions. If mistakes are made, this could lead to significant financial loss for businesses.
- Discrimination – it is increasingly acknowledged that AI systems are often biased, particularly along racial and gender lines, and when such systems are used to automate decisions in recruitment or policing, this could disadvantage certain groups.
- Personal injury claims – the rise of autonomous vehicles and the increasing use of AI in the area of medical diagnosis inevitably introduces the risk of AI-related personal injury and bodily harm. Where AI technology is eventually used on a fully autonomous basis, new questions of liability will arise in the event of an accident or misdiagnosis.
Existing liability regime
The UK does not currently have a liability framework specifically applicable to harm or loss resulting from the use of emerging technologies such as AI. By way of exception, the UK has passed the Automated and Electric Vehicles Act 2018, pursuant to which liability for damage caused by an insured automated vehicle when driving itself lies with the insurer. Otherwise, redress for victims who suffer damage as a result of a failure of AI would most likely be sought under existing laws on damages in contract, consumer protection legislation and the tort of negligence.
The UK’s existing liability regime is largely causative and fault-based. So, to seek redress under the tort of negligence, the claimant needs to prove that:
- the defendant owed a duty of care to the claimant;
- the defendant breached that duty; and
- that breach caused injury or loss to the claimant, and that injury or loss was reasonably foreseeable.
Proving liability in tort can be difficult, and there are many factors to be taken into consideration where AI products cause harm – for example, was the defect attributable to the design of the product, to its programming, or to the way the product was used? Moreover, where AI is deployed on an autonomous basis with limited or no human intervention, it may become more problematic to establish foreseeability.
Similarly, to claim damages under contract, the claimant needs to prove that the defendant breached a term of the contract and that breach caused loss. Whilst this may be relatively straightforward with simple products, establishing causation in relation to AI products may not be possible if the defect cannot be traced back to human error. This could result in nobody at all being liable.
An exception to the fault-based liability regime arises under the EU’s product liability laws (such as the Directive on Liability for Defective Products and the Product Safety Directive). These impose strict liability on the producer of a defective product for damage caused by the defect, which means that fault or negligence does not need to be established.
European Commission findings
The report concludes that the existing liability regime ensures at least basic protection for victims whose damage is caused by the operation of such new technologies. However, the report also acknowledges that (a) the specific characteristics and complexities of these technologies and their applications may make it more difficult to offer victims compensation in all cases where this seems justified and (b) this may not offer a fair and efficient allocation of liability in all cases.
To rectify this, the report recommends certain adjustments to the existing liability regime and outlines key findings on how that regime should be designed and adjusted, as follows:
- Strict liability: strict liability is an appropriate response to the risks posed by emerging digital technologies which carry an increased risk of harm to individuals (e.g. AI-driven robots in public spaces). Strict liability should lie with the person who is in control of the risk connected with the operation of emerging digital technologies and who benefits from their operation (the operator).
- Adapted range of duties of care: operators of emerging digital technologies should have to comply with an adapted range of duties of care, including choosing the right system and monitoring and maintaining it. There should be a duty on producers to equip technology with means of recording information about the operation of the technology (logging by design) if such information is typically essential for establishing whether a risk of the technology materialised.
- Allocation of liability:
- manufacturers of products or digital content incorporating emerging digital technology should be liable for damage caused by defects in their products, even if the defect was caused by changes made to the product under the producer’s control after it was placed on the market;
- if there are two or more operators, in particular (a) the person primarily deciding on and benefitting from the use of the relevant technology (frontend operator) and (b) the person continuously defining the features of the relevant technology and providing essential and ongoing backend support (backend operator), strict liability should lie with the one who has more control over the risks of the operation;
- producers should be strictly liable for defects in emerging technologies even if such defects appear after the product was put into circulation, as long as the producer was still in control of updates to, or upgrades of, the technology.
- Joint and several liability: where two or more persons cooperate on a contractual or similar basis in the provision of different elements of a commercial and technological unit, and where the victim can demonstrate that at least one element has caused damage in a way triggering liability but cannot demonstrate which element, all potentially liable parties should be jointly and severally liable vis-à-vis the victim.
- Proving causation: where a particular technology increases the difficulties of proving the existence of an element of liability beyond what can be reasonably expected, victims should be entitled to facilitation of proof.
- Insurance: for situations exposing third parties to an increased risk of harm, compulsory liability insurance could give victims better access to compensation and protect potentially liable parties against the risk of liability.
- Separate legal personality not necessary: for the purposes of liability, it is not necessary to give devices or autonomous systems a legal personality, as the harm these may cause can and should be attributed to existing persons or bodies.
The report further makes clear that a person using a technology which has a certain degree of autonomy should not be less accountable for ensuing harm than if said harm had been caused by a human auxiliary.
Comment
Whilst there appears to be consensus that the existing liability regime is generally sufficient to address the current risks posed by emerging technologies, as AI technology continues to evolve, we anticipate that the existing legislative framework for tort and product liability will need to be adapted accordingly.
In the meantime, businesses will have to assess whether or not they are sufficiently protected against liability risks arising from such emerging technologies, be it as operators, users or manufacturers. This could be by way of contractual arrangements with suppliers and/or customers (e.g. warranties and indemnities) or by taking out appropriate insurance coverage.
How can Burges Salmon help?
For further information, please contact Helen Scott-Lawler or Amanda Leiu.
This article was written by Amanda Leiu, a senior associate in the Commercial team at Burges Salmon.