On 16 April 2026, the Treasury Select Committee (TSC) published responses from HM Treasury (HMT), the Bank of England (BoE) and the Financial Conduct Authority (FCA) (together, the Regulators) to its report on AI in Financial Services (the Report).

Background

In January 2026, the TSC published the Report, which addressed the TSC’s core question on AI in financial services: whether the regulators are doing enough to manage the risks it presents. The Regulators have each responded to the Report, and their responses have now been published by the TSC.

Summary

Key points in HMT’s response to the Report include:

  • HMT shares the TSC’s view that the safe adoption of AI presents significant opportunities for consumers and the wider economy but that the associated risks need to be managed effectively.
  • Regulated firms are already required to manage technology-related risks to consumers and financial stability, including those arising from the use of AI, under existing rules.
  • HMT and the other Regulators are continually reviewing their approach as the technology develops.
  • The TSC recommended in its Report that HMT designate the major AI and cloud providers as Critical Third Parties (CTPs) by the end of 2026. HMT confirmed that it is in the process of gathering evidence to support decision-making in relation to potential designations and expects to make initial designation decisions this year.

Key points in the BoE’s response to the Report include:

  • The BoE highlighted that it agrees with the TSC’s view that AI will have broad, complex, and likely long-term implications for how the UK financial system serves the real economy. However, it also emphasised that it does not agree with the TSC’s characterisation that the BoE is taking a ‘wait and see’ approach.
  • Given the BoE’s statutory objectives, it wants to create an environment in which responsible adoption of AI can thrive and contribute to financial sector innovation, competition, competitiveness, and growth, whilst safeguarding the integrity of the financial system.
  • In 2023, the BoE issued new Model Risk Management Principles for banks, which are technology-agnostic and outcomes-focused but deliberately include factors relevant to the use of AI models. The BoE intends to build on these further in 2026.
  • The Prudential Regulation Authority has highlighted AI adoption in its 2026 supervisory priorities, meaning AI will be a key topic of exploration and scrutiny in its supervisory dialogues with firms.
  • The Financial Policy Committee (FPC) supports the BoE’s and the FCA’s initiatives to continue monitoring the adoption of AI by regulated firms and has asked them to undertake further work on agentic AI, focused on use cases in payments and financial markets.
  • All of this has been underpinned by the BoE’s active monitoring through ongoing engagement with practitioners and experts.
  • The BoE is also actively collaborating with domestic and international authorities on the use of AI in the financial system.
  • The BoE also uses AI, where appropriate, to support and enhance its own capabilities.
  • In relation to financial stability risks, the BoE will continue to monitor whether debt financing of AI development increases as projected, as well as concerns about potential disruption in risky credit markets, particularly in private credit, where growing exposures to AI have contributed to concerns over asset quality and valuation uncertainty.
  • The BoE is pursuing work on simulation methods with international counterparts to better understand the conditions under which AI agents trading in financial markets could demonstrate correlated behaviour or ‘herding’ and so exacerbate procyclical dynamics, amplifying a stress scenario.
  • The BoE remains committed to delivering the CTP regime successfully, and the FPC will continue to monitor its implementation and outcomes, with a focus on the regime’s impact in reducing the systemic risks posed by CTPs.

Key points in the FCA’s response to the Report include:

  • Over the past 12 months, the FCA considers that it has engaged extensively with industry to understand how firms are using AI, the uses and limitations of data, and the relative accuracy of various generative AI models.
  • The Supercharged Sandbox offers any financial services firm looking to innovate and experiment with AI the opportunity to do so using real-world datasets, accelerated computing, software and cloud capacity.
  • These innovation services are all underpinned by the FCA’s 2024 AI Update, which set out how the regulatory framework, in particular the Consumer Duty and the Senior Managers and Certification Regime, applies to financial services firms’ use of AI.
  • The FCA welcomes the Government’s announcement that it will bring AI chatbots within scope of the Online Safety Act and will work closely with the Government and Ofcom, and other partners, to ensure this regulation is effective.
  • The FCA’s evidence base will be enhanced by the Mills Review, which will look at how retail financial services will be impacted by AI into the 2030s and beyond.
  • AI’s implications for the cyber resilience of the UK more broadly require a system-wide response, and the FCA states that it is actively exploring new ways to play its part, including in relation to the potential harm adversaries could cause by using AI to target the UK’s cyber infrastructure.