Algorithmic decision-making in the public sector: lessons learned from cancelled projects

The Data Justice Lab at Cardiff University has produced "the first comprehensive overview of [automated decision-making systems in public services] being cancelled across western democracies", giving the reasons for those cancellations and drawing lessons from them.
The report is the outcome of research concerning paused or cancelled government automated systems from the UK, Australia, Canada, Europe, New Zealand and the U.S., in areas including fraud detection, child welfare and policing.
The report is important. Here we draw out some key points relating to the when, why, who and what.
Out of the 61 ADS cancellations studied, they failed at the following stages:
| Stage of ADS failure | # |
| --- | --- |
| Development / investigatory stage | 3 |
| After pilot / testing | 9 |
| After implementation / use | 31 |
| Pre-emptive ban / moratorium | 18 |
Identifying the precise stage at which a system failed is not straightforward. For example, the report suggests that many of the pre-emptive ban / moratorium cases related to facial recognition, where a moratorium had been imposed but the systems had already been trialled. This is a useful example of the difficulty of objectively monitoring ADS and AI across jurisdictions and contexts, particularly where information is limited.
They failed for the following reasons:
| Reason for ADS failure | # |
| --- | --- |
| Government agency decision - effectiveness | 31 |
| Civil society critique or protest | 26 |
| Critical media investigation | 24 |
| Legal action | 19 |
| Government concern - privacy, fairness, bias, discrimination | 13 |
| Critical government review | 12 |
| Political intervention | 8 |
| Government decision - procurement, ownership | 6 |
| Other | 5 |
| Corporate decision to cancel availability of system | 3 |
Sometimes there were multiple reasons for an ADS's failure; those reasons may have operated in parallel or sequentially. Each played a part, suggesting that the development of trustworthy ADS (and AI) must rely on various symbiotic components; no single policy measure will be a panacea.
Take an example: critical media investigation was "responsible for identifying trials or implemented systems whose existence was not widely known until reported. In this way, media coverage is playing a significant role in rendering visible the systems and their impact on people." Presumably, that provided the transparency needed for civil society critique, which took the form of "community organisations raising concerns and research outputs that raised concerns about the impact of ADS".
As expected, ADS stakeholders are many and varied.
The report identifies "10 recommendations which we believe are necessary to improve the landscape, culture and context of ADS use in the UK at local and national level".
ADS (and AI) are context-dependent, but they often raise important legal, technical and social issues that are relevant to other ADS in different contexts. Many of these recommendations are already being developed in other jurisdictions and contexts; that reflects the global (and universal) nature of the issues. The report provides a wealth of references to how and where these recommendations are being called for and implemented. But given the growing use of ADS globally, the report inevitably cannot be a complete sourcebook. So all stakeholders - including ADS developers and the wider industry, regulators, and local and national governments - need to keep up to date with developments, including learning from failed ADS projects.
Knowing what's relevant will involve considering the specific ADS or AI in question and its context, and then applying that to the facts. As far as we know, there's no way to automate that completely. So, if you would like to discuss how you procure, develop and deploy ADS/AI, please contact Tom Whittaker or Martin Cook.
This article was written by Tom Whittaker and Trulie Taylor.
The Data Justice Lab has researched how public services are increasingly automated and how government institutions at different levels are using data systems and AI. However, our latest report, Automating Public Services: Learning from Cancelled Systems, looks at another current development: the cancellation of automated decision-making systems (ADS) that did not fulfil their goals, led to serious harm, or met significant opposition through community mobilization, investigative reporting, or legal action. The report provides the first comprehensive overview of systems being cancelled across western democracies.
https://datajusticelab.org/2022/09/23/new-research-report-learning-from-cancelled-systems/