On 20 January 2026, the House of Commons Treasury Committee (Committee) published a report on artificial intelligence (AI) in financial services.
Background
On 3 February 2025, the Committee launched an inquiry to examine the opportunities and risks posed by AI for the UK financial services sector. In response to its call for evidence, the Committee received 84 written submissions and correspondence from 6 major AI and cloud providers, focusing on AI’s impact on financial services consumers and financial stability. The inquiry revealed significant risks to consumers and financial stability, and the Committee’s core question in this report was whether the regulators are doing enough to manage the risks presented by AI in financial services.
Overview
The Committee’s report highlights the following concerns raised by industry stakeholders, together with the regulators’ responses to those concerns.
In particular:
- Regulatory framework and approach: Both the Financial Conduct Authority (FCA) and Prudential Regulation Authority (PRA) told the Committee that the regulatory framework offers sufficient protection for consumers and financial stability against the risks posed by AI. The regulators further said they take a reactive approach, dealing with the impact of AI-related situations as they occur.
- Consumers: The Committee said it received a significant volume of evidence about AI’s risks to financial services consumers, specifically in relation to: the lack of transparency in AI-driven decision-making in credit and insurance; how AI financial decision-making and AI-enabled product tailoring threaten financial exclusion for the most disadvantaged consumers; the risk that unregulated financial advice from AI search engines will mislead consumers; and how AI usage may result in an increase in fraud. In response to these risks, the report describes the FCA’s monitoring work, such as setting up the AI Consortium and running a periodic AI survey in collaboration with the Bank of England (BoE). The report also notes the FCA’s preventative measures on AI financial product safety through its launch of the AI Live Testing service alongside its new Supercharged Sandbox, which allows firms to experiment with AI solutions pre-deployment but which the report noted is still limited to a small group of firms.
- Regulatory clarity: According to the report, although the FCA and Information Commissioner’s Office announced in June 2025 that they would create a joint statutory code of practice for firms developing or deploying AI for automated decision-making, many stakeholders commented on the lack of clarity across the current regulatory framework, in particular on the expectations under the Senior Managers & Certification Regime (SM&CR) in the context of AI. The FCA acknowledged that firms have concerns about accountability for harm caused to consumers through the use of AI, but maintained that clear lines of accountability must be established when AI systems produce harmful or unfair outcomes. David Geale, the FCA’s Executive Director for Payments and Digital Finance, told the Committee that senior managers should demonstrate that they understand and control risks within their areas of responsibility, which can be captured under the existing framework without the need for a new senior manager function.
- Financial stability: The report highlights stakeholders’ concerns about heightened cyber-security vulnerabilities, the UK’s over-reliance on overseas AI and cloud services, which threatens the sector’s operational resilience, and the potential for AI-driven market trading to amplify herd behaviour, which could risk financial stability or, in a worst-case scenario, cause a financial crisis. The report noted that the BoE and FCA do not conduct AI-specific cyber or market stress testing, and that since the establishment of the Critical Third Parties Regime in 2024 to tackle dependence on AI and cloud services, no major cloud or AI provider has been designated under the regime, with no further information provided by HM Treasury (HMT) on the process.
Conclusion and recommendations
The report concluded that the FCA, BoE and HMT are not doing enough to manage the risks presented by AI, and made the following recommendations for regulators:
- The FCA must provide the financial services sector with greater clarity on the application of existing rules to the use of AI. By the end of 2026, the FCA should publish comprehensive and practical guidance for firms on: (i) the application of existing consumer protection rules to their use of AI and (ii) the accountability and the level of assurance expected from senior managers under the Senior Managers Regime for harm caused through the use of AI.
- The FCA and BoE should conduct AI-specific stress testing.
- By the end of 2026, HMT must designate the major AI and cloud providers as critical third parties for the purposes of the Critical Third Parties Regime.