On 26 October 2023, the PRA published feedback statement FS2/23: Artificial intelligence (AI) and machine learning.

FS2/23 provides a summary of the responses received to the PRA and FCA’s joint discussion paper DP5/22 on AI and machine learning, published in October 2022. DP5/22 was intended to further the regulators’ understanding of, and deepen dialogue on, how AI may affect their respective objectives for the prudential and conduct supervision of firms.

The FS aims to acknowledge responses to DP5/22, identify themes and provide an anonymised overall summary. It does not include policy proposals, nor does it signal how the supervisory authorities are considering clarifying, designing or implementing current or future regulatory proposals on this topic.

The key points made by respondents to DP5/22 included:

  • A regulatory definition of AI would not be useful. Many respondents instead favoured alternative, principles-based or risk-based approaches, focusing on the specific characteristics of AI or the risks it poses or amplifies.
  • As with other evolving technologies, AI capabilities change rapidly. Regulators could respond by designing and maintaining ‘live’ regulatory guidance, i.e. periodically updated guidance and examples of best practice.
  • Ongoing industry engagement is important. Initiatives such as the AI Public Private Forum have been useful and could serve as templates for ongoing public-private engagement.
  • Respondents considered the regulatory landscape for AI to be complex and fragmented. Greater coordination and alignment between regulators, both domestic and international, would therefore be helpful.
  • Most respondents said that data regulation in particular is fragmented, and that more regulatory alignment would be useful in addressing data risks, especially those related to fairness, bias, and management of protected characteristics.
  • A key focus of regulation and supervision should be on consumer outcomes, especially with respect to ensuring fairness and other ethical dimensions.
  • The increasing use of third-party models and data is a concern and an area where more regulatory guidance would be helpful.
  • AI systems can be complex and involve many areas across the firm. Therefore, a joined-up approach across business units and functions could be helpful to mitigate AI risks. In particular, closer collaboration between data management and model risk management teams would be beneficial.
  • Respondents said that existing firm governance structures (and regulatory frameworks such as the Senior Managers and Certification Regime) are sufficient to address AI risks.