This article was originally published in the Society for Computers and Law journal in August 2020.
AI on the rise
Companies are increasingly turning to AI to grow and improve their businesses. This is hardly surprising, given the potential bounty AI promises. Research by the management consultants McKinsey & Company suggests that AI could deliver additional global economic output of $13 trillion per year by 2030[1]. There are also reports suggesting that the COVID-19 crisis has accelerated the appetite of some businesses to adopt AI[2].
With the potential for substantial rewards comes the potential for substantial risk. As is often the case with new technology, the nature and extent of the risk is not fully understood and may only become clearer as AI is adopted. However, as McKinsey puts it, 'both sides of the AI blade are far sharper' than with previous technological developments[3]. This is because AI potentially represents a huge step change in the extent to which businesses 'let go' of decision-making processes and entrust them to technology. It creates the potential for mistakes or failures to occur before a business can stop them, or even without the business being aware that there is a problem.
The potential risks are myriad, depending on the nature of the business and the nature of the AI being used. The distinctly 21st-century risks have already been identified: privacy and data protection violations, discriminatory decision-making and political manipulation (as in the Cambridge Analytica affair). But more traditional economic and casualty risks also need to be considered:
- Systemic risk (high frequency, low individual impact): a bank uses AI to calculate mortgage repayments. Due to a programming error, all customers are over-charged by 0.2 percent each month for two years before the issue is identified.
- Catastrophic risk (low frequency, high individual impact): AI used to assist with air traffic control fails, causing a passenger aircraft to crash.
- Unmanageable risk (high frequency, high individual impact): an AI medical algorithm used in diagnostics goes wrong, resulting in repeated missed diagnoses and incorrect prescriptions, in turn leading to widespread personal injury and loss of life.
Systemic and catastrophic risks can ordinarily be managed and mitigated using conventional techniques (such as the use of back-up systems, checks and balances, and insurance), but unmanageable risks are usually fatal to a business. It is essential, therefore, that businesses triage their AI risks so that they can assess whether the risk is manageable and, if so, what the best risk management strategy is.
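To illustrate how quickly a systemic failure of the kind in the first example can accumulate, a rough calculation may help. The customer numbers and repayment figures below are assumptions for illustration only, and the example reads the '0.2 percent' over-charge as 0.2 percent of each monthly repayment.

```python
# A rough, hypothetical illustration of how the systemic-risk example accumulates.
# All figures are assumptions; the point is the aggregation, not the numbers.

monthly_repayment = 900.0   # assumed average monthly repayment (GBP)
overcharge_rate = 0.002     # 0.2% over-charge applied to each monthly repayment
months = 24                 # two years before the error is identified
customers = 500_000         # assumed size of the mortgage book

overcharge_per_customer = monthly_repayment * overcharge_rate * months
total_exposure = overcharge_per_customer * customers

print(f"Over-charge per customer: GBP {overcharge_per_customer:.2f}")               # ~GBP 43.20
print(f"Aggregate exposure (before remediation costs): GBP {total_exposure:,.0f}")  # ~GBP 21,600,000
```

Individually trivial over-charges can become a material aggregate exposure, which is precisely why triage of this kind of risk matters.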
Why might AI fail?
A detailed exploration of this question would fill pages and is best reserved for a separate article. However, it is fairly easy to see how failure could result from:
- errors in the procurement, development and implementation of an AI programme;
- errors in the composition of datasets the AI relies on;
- a breakdown at the human / AI interface – perhaps where an AI system is used to augment human decision making processes; or
- cyber-attack or software corruption.
A further consideration is the 'Black Swan' problem. Black Swan events are outlier phenomena that may produce data outside the ranges anticipated when the AI was designed. AI may struggle to respond to such an event for two reasons. First, the data sets the AI is trained on may be limited in time or scope, so the AI will not have 'learned' from the kind of data a Black Swan can produce. Second, the AI may not have been designed to handle data outside the anticipated ranges: it may simply continue to make decisions during a Black Swan event, even though those decisions fall outside the range of what was expected.
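By way of illustration of the second point, a simple guard-rail of the kind sketched below (the feature names and ranges are hypothetical) can flag inputs that fall outside anything the model saw in training, so that a human reviews the decision rather than the AI continuing unchecked.

```python
# A minimal sketch (hypothetical features and ranges) of an out-of-range guard.
# Record the range of each input seen during training, and escalate any live
# input that falls outside it instead of letting the model decide on its own.

TRAINING_RANGES = {
    # feature: (minimum seen in training data, maximum seen in training data)
    "interest_rate": (0.5, 7.0),    # per cent - assumed
    "loan_to_value": (10.0, 95.0),  # per cent - assumed
}

def within_training_range(observation: dict) -> bool:
    """True only if every feature lies inside the ranges seen during training."""
    for feature, (low, high) in TRAINING_RANGES.items():
        value = observation.get(feature)
        if value is None or not (low <= value <= high):
            return False
    return True

def decide(observation: dict) -> str:
    if not within_training_range(observation):
        return "ESCALATE_TO_HUMAN"  # Black Swan-style input: do not decide automatically
    return "MODEL_DECIDES"          # placeholder for the model's own prediction

# Example: an interest-rate spike outside anything seen in the training data
print(decide({"interest_rate": 15.0, "loan_to_value": 80.0}))  # -> ESCALATE_TO_HUMAN
```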
We should be clear that we are not advocating against the use of AI – rather, these issues demonstrate why businesses need to invest appropriate time and resources in ensuring that AI is fit for purpose and appropriately risk-managed.
Why is this a risk for directors and officers?
Directors’ duties
Directors and officers will no doubt be aware of the duties they owe to the company. In the present context, the two most relevant are:
- To promote the success of the company for the benefit of its members. This duty is primarily aimed at enhancing shareholder value. However, in 2010 this concept was widened to require directors to take a more holistic view embracing what the Government called ‘responsible business behaviour’. This introduced six factors that corporate directors need to have regard to. Most relevant to the present debate are: (i) the long-term impact of decisions; (ii) the interests of employees; (iii) the need to foster customer and supplier relationships; (iv) the desirability of maintaining a reputation for high standards of business conduct; and (v) the impact on the environment and the wider community.
- To exercise reasonable care, skill and diligence. This duty requires directors to have the knowledge, skill and experience that would reasonably be expected of anyone doing that particular job. In addition, a director has to perform according to the knowledge, skill and experience they actually have – so if they have a particular skill set in, say, accounting, their competence will be judged against what would be expected of someone with that skill set.
It does not take a huge leap to see how these duties would be engaged where a company is using AI, prompting directors to ask the following questions:
- Is a new piece of AI software fit for purpose such that it will enhance the value of the business?
- Does it adequately protect our employees’ and customers’ personal data and confidential information?
- Do we have a system of checks and safeguards in place to identify and learn from instances where the AI fails to perform as expected? (An illustrative sketch of one such check appears after this list.)
- Do we have a risk mitigation / disaster recovery plan in place in the event of catastrophic or systemic failure?
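As foreshadowed above, one concrete form such a safeguard could take is a cross-check of the AI's output against an independent, deterministic reference calculation, with an alert raised when the two diverge. The sketch below is hypothetical – the function names, tolerance and figures are assumptions – and simply illustrates the idea in the context of the mortgage-repayment example.

```python
# A minimal, hypothetical sketch of one safeguard: compare an AI component's
# output against an independent reference calculation and alert on divergence.

def reference_monthly_repayment(principal: float, annual_rate: float, months: int) -> float:
    """Standard annuity formula, used as an independent check on the AI's figure."""
    r = annual_rate / 12
    return principal * r / (1 - (1 + r) ** -months)

def check_ai_repayment(ai_figure: float, principal: float, annual_rate: float,
                       months: int, tolerance: float = 0.001) -> bool:
    """Return True if the AI's figure is within 0.1% of the reference; otherwise alert."""
    expected = reference_monthly_repayment(principal, annual_rate, months)
    if abs(ai_figure - expected) / expected > tolerance:
        print(f"ALERT: AI repayment {ai_figure:.2f} deviates from reference {expected:.2f}")
        return False
    return True

# Example: the AI's figure includes the 0.2% over-charge from the earlier example
correct = reference_monthly_repayment(200_000, 0.03, 300)
check_ai_repayment(correct * 1.002, 200_000, 0.03, 300)  # raises an alert
```

The point is not the particular check, but that failures are surfaced promptly rather than accumulating silently for two years.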
It is not incumbent on directors to find the answers to these types of questions themselves – the need to employ, and delegate to, appropriately skilled employees, contractors and advisers to address these issues will be inevitable in the vast majority of businesses. However, it will not be enough for a board to simply label AI as an ‘IT matter’ and hive it off for the IT manager to deal with.
The breadth of AI’s application and the level of potential risk it carries require boards to look at the matter more holistically, taking into account all the different areas of the business that could be affected. By way of example, if a company is intending to procure a piece of AI software to support its online sales capability, one might expect to see engagement by the procurement, IT, sales, information / cyber security and risk divisions of the business to assess how the AI should be designed, developed and implemented. The role of the board will be to draw together the views expressed by these parts of the business and take a balanced view on whether, and if so how, to proceed with the proposed solution.
What might happen if directors fail to fulfil their duties in relation to the use of AI? In principle, they could be liable to a claim for damages. This would most likely come about where there has been a material failure of the AI and it has caused the company loss. The claim could come from the company itself, a liquidator or administrator (if the company has been tipped into insolvency) or from the shareholders by way of a derivative action[4]. The latter type of claim should not be underestimated, particularly by directors of listed companies. Derivative claims are a relatively new concept but they are gaining increasing popularity as a way for shareholders (both institutional and private) to bring class actions against the company and its directors to seek redress where there has been a meaningful loss of shareholder value.
In practice these types of civil claim will, hopefully, not arise too frequently but when they do they pose a substantial threat to directors – both financial and reputational.
Regulatory risk
It is a cliché to say that we live in the age of regulation, and AI is no exception. While the UK has no overall regulator of AI[5], its potential scope (and the industries employing it) means that it has inevitably caught the attention of many regulators. For example, the Information Commissioner’s Office, the Financial Conduct Authority and the Competition and Markets Authority are all actively looking at the impact of AI on the areas they regulate, and we anticipate that this trend will continue with other regulators. As AI is deployed in safety-critical functions, for instance, it seems inevitable that the Health and Safety Executive will need to develop a regulatory strategy towards it.
As a consequence, businesses can expect regulators to take an increasing interest in how they are using and managing AI, and in the corporate governance surrounding it. In the event of an AI failure, intervention by a regulator may be inevitable. From the directors’ own point of view, this could result in regulatory action being taken against them personally. For example, the FCA may take action against directors of regulated businesses if it considers that they are responsible for an AI failure that has resulted in consumer detriment (such as the mortgage repayment miscalculation example cited above).
Even if a director is not personally in the firing line, regulatory investigations often require directors and officers to act as witnesses and to devote substantial management time to the investigation.
In a similar vein, it is clearly foreseeable that an AI failure event could result in corporate officers being required to appear before Public Inquiries or Government Select Committees. Although not on AI matters, the adverse publicity created for Cambridge Analytica and News International by their owners’ and directors’ appearances before Select Committees and Public Inquiries was, arguably, more damaging than any litigation or regulatory action that followed. Appearing before such bodies is taxing – both emotionally and financially – and the effects should not be underestimated.
We should stress that we are not predicting that directors of businesses using AI are opening the floodgates to a world of unbearable scrutiny. Businesses and their directors deal with new and emerging risks every year and, for the most part, are relatively untouched by the types of issues, intervention and oversight discussed above.
However, boards need to recognise that the issues raised by AI mean there is an elevated risk of this kind of intervention if the AI their business uses fails. AI is a new, and largely untested, technology, the risks it presents are potentially very large, and those risks are at the forefront of the minds of governments and regulators.
As a consequence, it is important that directors prioritise AI appropriately so that it receives a level of consideration commensurate with the risks and rewards it offers to the business – both for the benefit of the business and its stakeholders and for the directors’ own benefit.
This article was written by Matthew Walker and Tom Whittaker.
Matthew Walker is a Partner in Burges Salmon’s dispute resolution unit. He has broad experience of handling complex multi-party disputes and regulatory investigations.
Tom Whittaker is a director and solicitor advocate in Burges Salmon’s dispute resolution unit.
[1] https://www.mckinsey.com/featured-insights/artificial-intelligence/notes-from-the-ai-frontier-modeling-the-impact-of-ai-on-the-world-economy
[2] https://www.ey.com/en_uk/ccb/how-do-you-find-clarity-in-the-midst-of-covid-19-crisis
[3] https://www.mckinsey.com/business-functions/mckinsey-analytics/our-insights/confronting-the-risks-of-artificial-intelligence
[4] This is a process that allows shareholders to bring claims against directors on behalf of the company.
[5] The Departments for Digital, Culture, Media & Sport and for Business, Energy & Industrial Strategy have set up an Office for Artificial Intelligence, which is responsible for overseeing the Government’s AI and Data Grand Challenge. However, it has no regulatory oversight of how businesses use AI.