Council of Europe Convention on AI: UK, US, EU Sign First Legally Binding AI Framework

The UK, US and the EU, amongst others, have become signatories to the Council of Europe Framework Convention on Artificial Intelligence (AI Convention), the “first legally binding international treaty aiming to ensure that AI systems are developed and utilised in ways that respect human rights, democracy and the rule of law”. Each signatory is known as a Party.
The AI Convention seeks to uphold the ethical development and regulation of AI. According to the Council of Europe’s explanatory report, it aims to provide “a common legal framework at the global level in order to apply the existing international and domestic legal obligations that are applicable to each Party.”
According to the UK government (here), "Once the treaty is ratified and brought into effect in the UK, existing laws and measures will be enhanced".
Here we summarise the key points.
The Convention aims
to ensure that activities within the lifecycle of artificial intelligence systems are fully consistent with human rights, democracy and the rule of law
Each Party shall adopt or maintain appropriate legislative, administrative or other measures to give effect to the provisions set out in the Convention. What those measures look like will reflect the severity and probability of occurrence of adverse impacts on "human rights, democracy and the rule of law throughout the lifecycle of artificial intelligence systems".
“artificial intelligence system” means
a machine-based system that, for explicit or implicit objectives, infers, from the input it receives, how to generate outputs such as predictions, content, recommendations or decisions that may influence physical or virtual environments. Different artificial intelligence systems vary in their levels of autonomy and adaptiveness after deployment.
The definition is intentionally drawn from the OECD’s definition of AI, which is also the definition used in the EU AI Act (see our flowchart for navigating the EU AI Act here). Further, according to the Convention’s explanatory report, it is meant to “ensure legal precision and certainty, while also remaining sufficiently abstract and flexible to stay valid despite future technological developments”.
The Convention covers the activities within the AI lifecycle that have the potential to interfere with human rights, democracy and the rule of law.
The Convention applies to activities within the lifecycle of AI systems undertaken by public authorities, or by private actors acting on their behalf. Each Party must also address the risks and impacts arising from AI activities of other private actors in a manner conforming with the object and purpose of the Convention.
The Convention does not apply to: activities related to the protection of a Party’s national security interests (provided they are conducted in a manner consistent with international law); research and development activities regarding AI systems not yet made available for use (unless testing has the potential to interfere with human rights, democracy and the rule of law); or matters relating to national defence.
The AI Convention sets out seven principles to ensure AI is developed and implemented ethically: human dignity and individual autonomy; transparency and oversight; accountability and responsibility; equality and non-discrimination; privacy and personal data protection; reliability; and safe innovation.
The Convention also sets obligations for Parties to adopt or maintain measures so that effective remedies are available for violations of human rights arising from activities within the lifecycle of AI systems.
Further, the Convention requires each Party to adopt or maintain risk and impact management frameworks: the “severity, probability, duration and reversibility of risks and impacts” should be evaluated, mitigated and documented on an ongoing basis. Authorities can also introduce different risk classifications or implement an outright ban.
More broadly, the AI Convention requires its implementation to be non-discriminatory, to respect the rights of persons with disabilities and of children, to be subject to public consultation, to promote digital literacy and to safeguard human rights.
The Convention is principles-based. That provides flexibility to accommodate the various legal systems and laws of the signatory Parties. However, it also means that what each principle looks like in practice may differ between jurisdictions.
The next step is for signatories to ratify the Convention. The Convention enters into force three months after the date on which five signatories have expressed their consent to be bound by it. For any Party that subsequently ratifies the Convention, it enters into force three months after that Party’s ratification is deposited with the Secretary General of the Council of Europe.
Afterwards, there are reporting and oversight mechanisms to monitor compliance. Whether and to what extent information about such compliance will be made public is not known.
The Convention was signed by the UK, US, the EU, Andorra, Georgia, Iceland, Israel, Norway, the Republic of Moldova and San Marino.
However, the Convention may have broader application. States, public sector and private sector organisations may look to it as a potential framework or set of principles to guide responsible AI. The 46 Council of Europe member states, the European Union and 11 non-member states (Argentina, Australia, Canada, Costa Rica, the Holy See, Israel, Japan, Mexico, Peru, the United States of America and Uruguay) negotiated the treaty. Representatives of the private sector, civil society and academia contributed as observers.
If you have any questions or would otherwise like to discuss any of the issues raised in this article, please contact David Varney, Tom Whittaker, Liz Smith, or another member of our Technology Team. For the latest updates on AI law, regulation, and governance, see our AI blog at: AI: Burges Salmon blog (burges-salmon.com).
This blog was written by Jenora Vaswani.