On 31 October 2024, Sarah Breeden (Deputy Governor, Financial Stability, Bank of England (BoE)) gave a speech at the HKMA-BIS Joint Conference on Opportunities and Challenges of Emerging Technologies in the Financial Ecosystem. In this speech, Ms Breeden explores the novel features of Generative AI (GenAI), and how we can uphold financial stability whilst harnessing its potential benefits for economic growth.
Two issues to keep a watchful eye on
Whilst the financial services industry is still in the early stages of adopting GenAI, Ms Breeden warns that a ‘watchful eye’ should be kept on two issues:
- Central banks and financial regulators should continue to assure themselves that technology-agnostic regulatory frameworks are sufficient to mitigate the financial stability risks from AI, as models become ever more powerful and adoption increases. In addition, regulators need to ensure that managers of financial firms are able to understand and manage what their AI models are doing as they evolve.
- Regulatory perimeters should be kept under review should the financial system become more dependent on shared AI technology and infrastructure. For example, stress-testing frameworks could evolve to assess whether AI models used in the front line of financial firms’ business could interact with each other in ways that are hard to predict ex ante.
AI Consortium
Ms Breeden refers to the AI Consortium that the BoE is launching to better understand the different approaches firms are taking to managing AI risks that could amount to financial stability risks.
The Financial Policy Committee is also publishing its assessment of AI’s impact on financial stability, which will set out how it will monitor the evolution of those risks going forward.
The use of AI in financial services
Ms Breeden discusses the latest BoE and Financial Conduct Authority (FCA) periodic survey of how financial services firms in the UK are using AI and machine learning. In particular, the survey found that 75% of firms are already using some form of AI in their operations (up from 53% in 2022), including all of the large UK and international banks, insurers and asset managers that responded. 41% of respondents are using AI to optimise internal processes, while 26% are using it to enhance customer support, helping to improve efficiency and productivity. Many firms are also using AI to mitigate the external risks they face from cyber-attack (37%), fraud (33%) and money laundering (20%).
Another striking finding in the survey is that only a third of respondents describe themselves as having a complete understanding of the AI technologies they have implemented in their firms. As firms increasingly consider using AI in higher-impact areas of their businesses, regulators should expect a stronger, more rigorous degree of oversight and challenge by their management and boards – particularly given AI’s autonomy, dynamism and lack of explainability.
Foundation models
Ms Breeden notes that it is plausible that there could be widespread use of common foundation models, upon which downstream applications depend, not just across the financial system but across the economy and across borders. She warns that this introduces macro fragilities: an incident with a base model, or with a cloud provider supporting it, could have system-wide implications. Common models could also increase the risk of correlated responses by market participants to shocks, amplifying stress.