On 31 October 2024, the Bank of England (BoE) published a speech by Sarah Breeden (Deputy Governor, Financial Stability) at the HKMA-BIS Joint Conference on Opportunities and Challenges of Emerging Technologies in the Financial Ecosystem.

In this speech, Ms Breeden explores the novel features of Generative AI (GenAI) and how financial stability can be upheld whilst harnessing its potential benefits for economic growth.

Potential benefits

The speech begins by noting that GenAI is expected to bring considerable potential benefits for productivity and growth in the financial sector and the rest of the economy. But for the financial sector to harness those benefits, financial regulators must have policy frameworks designed to manage any risks to financial stability that come with them. Economic stability underpins growth and prosperity, and it would be self-defeating to allow GenAI to undermine it.

Two issues to keep an eye on

Whilst the financial services industry is still in the early stages of adopting GenAI, Ms Breeden states that regulators should keep a ‘watchful eye’ on two issues:

  • At the micro-prudential level, central banks and financial regulators should continue to assure themselves that technology-agnostic regulatory frameworks are sufficient to mitigate the financial stability risks from GenAI, as models become ever more powerful and adoption increases.
  • At the macro-prudential level, regulators should be aware of the possible need for intervention to support the stability of the financial system as a whole. Regulatory perimeters should be kept under review, should the financial system become more dependent on shared AI technology and infrastructure.

AI Consortium

Ms Breeden refers to the AI Consortium that the BoE is launching to further understanding of AI’s potential benefits and of the different approaches firms are taking to manage risks that could amount to financial stability risks. The Financial Policy Committee (FPC) will publish its assessment of AI’s impact on financial stability and set out how it will monitor the evolution of those risks going forward.

AI in financial services

Ms Breeden then refers to the results of a periodic survey that the BoE issued regarding the use of AI in financial services:

  • 75% of firms surveyed are using some form of AI in their operations, including all of the large UK and international banks, insurers and asset managers.
  • 41% of respondents are using AI to optimise internal processes, while 26% are using AI to enhance customer support, helping to improve efficiency and productivity.
  • 16% of firms are using AI for credit risk assessment, and a further 19% are planning to do so over the next three years.
  • 11% are using AI for algorithmic trading, with a further 9% planning to do so in the next three years.
  • 4% of firms are already using AI for capital management, and a further 10% are planning to use it in the next three years.
  • Many firms are using AI to mitigate the external risks they face from cyber-attack (37%), fraud (33%) and money laundering (20%).

What might AI mean for micro-prudential supervision?

Ms Breeden discusses what AI might mean for micro-prudential supervision and the Discussion Paper that the BoE and the Financial Conduct Authority issued in 2022 (DP5/22). She notes that respondents to the Discussion Paper highlighted the risk that the model risk management principles the regulators set out might not be sufficient to ensure model users fully understand the third-party AI models they deploy within their firms. Regulators therefore need to consider what explainability means in the context of generative AI, what controls they should expect firms to have in place, and what that means for regulatory and supervisory frameworks.

Ms Breeden also states that feedback to the Discussion Paper noted the lack of clear, widely applicable standards around the data on which AI models are trained. Only a third of respondents described themselves as having a complete understanding of the AI technologies they had implemented in their firms. That said, as firms increasingly consider using AI in higher-impact areas of their businesses, such as credit risk assessment, capital management and algorithmic trading, they should expect a stronger, more rigorous degree of oversight and challenge by their management and boards.

Ms Breeden added that respondents agreed that practical guidance would be helpful on what ‘reasonable steps’ senior management might be expected to take with respect to AI systems to comply with regulatory requirements.

What might AI mean for macro-prudential policy?

Ms Breeden advises that an issue the BoE worries about all the time as a macro-prudential policymaker is interconnectedness: the actions of one institution can affect others, firms can become critical nodes, and firms can be exposed to common weaknesses. AI could both increase that interconnectedness and increase the probability that existing levels of interconnectedness turn into financial stability risk, in particular through cyber-attacks. AI could also aid cyber attackers, for example through GenAI-created deepfakes that increase the sophistication of phishing attacks.

Ms Breeden also highlights the potential for system-wide conduct risk. If AI determines outcomes and makes decisions, what would be the consequences if, after a few years, such outcomes and decisions were legally challenged, with mass redress needed?