Different regulators are approaching artificial intelligence (AI) in different ways. The Financial Conduct Authority (FCA) has concluded that its existing rules and guidance provide a sufficient framework for managing risk and is therefore not planning to introduce AI-specific regulation at present. It also believes this approach is the best way to support UK growth and competitiveness, which it is reinforcing by showcasing real-world solutions in its AI Spotlight, facilitating the testing of models during the product development phase in its AI Lab, and supporting the build of proofs of concept by supercharging its Digital Sandbox with more data, computing power and tooling.

In the EU, where the AI Act sets out rules and obligations specific to AI, Member States continue to set up their supervisory structures for enforcement. At this point, there is uncertainty over whether the application date of the high-risk provisions will be pushed back, though some provisions – the prohibitions and the obligations for general-purpose AI models – already apply. In the meantime, data protection regulators have emphasised that they can take enforcement action under existing data protection law and continue to take a leading role in regulating AI. The UK's Information Commissioner's Office has published some of the most comprehensive guidance on data protection considerations for AI use and development. It has recently published a report on agentic AI and is expected to publish an update on automated decision-making shortly. It is also developing a code of practice on AI and automated decision-making.

We are also seeing some regulators adopting AI for their own supervisory tasks. The FCA continues to explore ways it can use AI to become a smarter regulator, as part of its strategy. It uses predictive AI to assist with real-time knowledge, an AI bot to direct consumer queries to the complaints or compensation services where appropriate, and it is experimenting with large language models to make certain processes more efficient.

The Bank of England and Prudential Regulation Authority use AI for multiple purposes, including research, communications to firms, forecasting and the management of both structured and unstructured data. They are developing tailored AI solutions, setting up an enterprise data platform in the cloud to support AI applications, and broadening the AI skills of their staff.

Both the use of AI and the range of use cases in firms, particularly in back- and middle-office settings, have multiplied over the last year and will continue to do so. Examples such as transcription, summarisation, triage, information extraction, coding of internal processes and workflow management span sectors and departments. Compliance uses are also developing in range and sophistication, although support for financial crime processes – such as gathering know-your-customer data, monitoring for potential money laundering, market abuse and fraud, and mitigating cyber-attacks – remains a favourite.

It is not just internal functions, however: AI is increasingly being used in customer-facing support functions, such as chatbots that handle queries and provide information. This looks set to extend into more personalised services using customer data, and even into simple forms of advice.

There is much talk about agentic AI, including in financial services, but current use cases tend to involve human oversight. We would expect firms to remain cautious about increasing autonomy, although we anticipate this will develop over time.