
OECD publishes paper on AI incident reporting

The Organisation for Economic Co-operation and Development (OECD) has published a paper on working towards a common framework for AI incident reporting.

According to the OECD:

  • This paper presents a common framework for reporting artificial intelligence (AI) incidents that provides a global benchmark for stakeholders across jurisdictions and sectors. 
  • The framework enables countries to adopt a common reporting approach while allowing them to tailor responses to their domestic policies and legal frameworks. 
  • Through its 29 criteria, the framework aims to help policymakers understand AI incidents across diverse contexts, identify high-risk systems, assess current and emerging risks, and evaluate the impact of AI on people and the planet.

Why is a common reporting framework needed?

A common and consistent framework for reporting AI incidents and hazards gives policymakers and organisations the information they need to learn from AI harms identified elsewhere in the world, and so prevent similar incidents from recurring. It would also align AI incident reporting across jurisdictions before individual reporting schemes are implemented. The OECD argues that pursuing this alignment now is urgent, as retrofitting consistency onto divergent national schemes would be costly and inefficient.

What are AI incidents and AI hazards?

  • An AI incident is an event, circumstance or series of events where the development, use or malfunction of one or more AI systems directly or indirectly leads to any of the following harms: (a) injury or harm to the health of a person or groups of people; (b) disruption of the management and operation of critical infrastructure; (c) violations of human rights or a breach of obligations under the applicable law intended to protect fundamental, labour and intellectual property rights; (d) harm to property, communities or the environment. 
  • An AI hazard is an event, circumstance or series of events where the development, use or malfunction of one or more AI systems could plausibly lead to an AI incident, i.e., any of the following harms: (a) injury or harm to the health of a person or groups of people; (b) disruption of the management and operation of critical infrastructure; (c) violations of human rights or a breach of obligations under applicable law intended to protect fundamental, labour and intellectual property rights; (d) harm to property, communities or the environment.
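
The distinction turns on whether harm actually materialised (incident) or was merely plausible (hazard). A minimal sketch of that distinction in Python follows; the type names and classification function are our own illustration, not anything prescribed by the OECD paper:

```python
from dataclasses import dataclass
from enum import Enum, auto


class HarmType(Enum):
    """The four harm categories used in the OECD definitions."""
    HEALTH = auto()                   # (a) injury or harm to people's health
    CRITICAL_INFRASTRUCTURE = auto()  # (b) disruption of critical infrastructure
    RIGHTS_VIOLATION = auto()         # (c) human, labour or IP rights violations
    PROPERTY_OR_ENVIRONMENT = auto()  # (d) harm to property, communities, environment


@dataclass
class AIEvent:
    description: str
    harms: list[HarmType]
    harm_occurred: bool  # True if harm materialised; False if merely plausible


def classify(event: AIEvent) -> str:
    """An event with realised harm is an incident; one where the same
    harms are plausible but not realised is a hazard."""
    if not event.harms:
        return "not reportable under this framework"
    return "AI incident" if event.harm_occurred else "AI hazard"
```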

What is in the framework?

The drafters reviewed four existing AI incident reporting frameworks and identified 88 criteria used to evaluate incidents or, in the case of product recalls, faulty products. These were distilled into the framework's 29 criteria, which fall across the following eight dimensions (a schematic view follows the list):

  1. Metadata dimension (9 criteria): Includes the incident’s title, description, and supporting material. 
  2. Harm details dimension (4 criteria): Describes the severity of the incident and the type of harm caused. 
  3. People and planet dimension (3 criteria): Covers affected stakeholders, associated AI principles, and violations of human rights. 
  4. Economic context dimension (4 criteria): Encompasses factors such as industry, business function, and impact on critical infrastructure. 
  5. Data and input dimension (1 criterion): Relates to the AI system’s training data. 
  6. AI model dimension (3 criteria): Indicates whether the incident is linked to the AI model or the interaction of multiple models. 
  7. Task and output dimension (2 criteria): Provides information on the task and autonomy level of the AI system. 
  8. Other information dimension (3 criteria): Allows submitters to provide additional incident details. Submitters affiliated to the organisation that developed or deployed the AI system can describe actions taken to cease, prevent or mitigate risks.
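
The paper does not prescribe a data format, but the dimension structure lends itself to a simple schema. Here is a minimal sketch, assuming a flat mapping from dimension to criteria count (the keys are our own shorthand, not OECD field names):

```python
# Sketch of the framework's 8 dimensions and their criteria counts.
FRAMEWORK_DIMENSIONS: dict[str, int] = {
    "metadata": 9,            # title, description, supporting material, ...
    "harm_details": 4,        # severity and type of harm
    "people_and_planet": 3,   # stakeholders, AI principles, human rights
    "economic_context": 4,    # industry, business function, infrastructure
    "data_and_input": 1,      # training data
    "ai_model": 3,            # single model vs interaction of multiple models
    "task_and_output": 2,     # task and autonomy level of the AI system
    "other_information": 3,   # additional details, mitigation actions
}

# The dimension totals add up to the framework's 29 criteria.
assert sum(FRAMEWORK_DIMENSIONS.values()) == 29
```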

If you would like to discuss how current or future regulations impact what you do with AI, please contact Tom Whittaker, Brian Wong, Lucy Pegler, Martin Cook, Liz Smith or any other member of our Technology team.

OECD (2025), “Towards a common reporting framework for AI incidents”, OECD Artificial Intelligence Papers, No. 34, OECD Publishing, Paris, https://doi.org/10.1787/f326d4ac-en.
