On 21 November 2024, the Bank of England (BoE) and Financial Conduct Authority (FCA) published a report setting out the results of their third survey of artificial intelligence (AI) and machine learning in UK financial services.

Background

In the report, the BoE and FCA highlight the increasing use of AI in UK financial services over the past few years and note that, while it brings many benefits, AI can also present challenges to the safety and soundness of firms, the fair treatment of consumers, and the stability of the financial system. In light of these challenges, the regulators stress the need to maintain an understanding of the capabilities, development, deployment and use of AI in UK financial services.

The survey was carried out to build on this existing work and deepen the BoE's and FCA's understanding of AI in financial services, following the previous two surveys (in 2019 and 2022) and providing ongoing insight and analysis into AI use by BoE- and/or FCA-regulated firms. The 2024 survey also incorporated questions on generative AI, given its growth since the 2022 survey.

Findings

In the report, the BoE and FCA set out their findings on a number of key topics, including:

  • Use and adoption of AI: The report flags that 75% of firms are already using AI, with a further 10% planning to use AI over the next three years. Foundation models were found to form 17% of all AI use cases, supporting anecdotal evidence for the rapid adoption of this complex type of machine learning.
  • Third-party exposure: The survey found that a third of all use cases are third-party implementations, supporting the view that third-party exposure will continue to increase as the complexity of models increases and outsourcing costs decrease.
  • Automated decision-making: 55% of all AI use cases were found to involve some degree of automated decision-making, with 24% of those being semi-autonomous (i.e. although they can make a range of decisions on their own, they are designed to involve human oversight for critical or ambiguous decisions). Only 2% of use cases involve fully autonomous decision-making, according to the report.
  • Materiality: Of all AI use cases, 62% are rated low materiality by the firms that use them, while 16% are rated high materiality.
  • Benefits and risks of AI: The report notes that the highest perceived current benefits are in data and analytical insights, anti-money laundering (AML) and combating fraud, and cybersecurity. In terms of risks, four of the top five perceived current risks are related to data, and the risks expected to increase the most over the next three years are third-party dependencies, model complexity, and embedded or 'hidden' models. Cybersecurity is rated as the highest perceived systemic risk both currently and in three years' time, with critical third-party dependencies expected to cause the largest increase in systemic risk.
  • Constraints: Data protection and privacy is the largest perceived regulatory constraint on the use of AI, followed by resilience, cybersecurity and third-party rules, and the FCA's Consumer Duty. The safety, security and robustness of AI is the largest perceived non-regulatory constraint.
  • Governance and accountability: The survey found that 84% of firms reported having an accountable person for their AI framework, and 72% of firms said that their executive leadership were accountable for AI use cases.