On 2 April 2026, the Financial Ombudsman Service (FOS) published its response to the Financial Conduct Authority’s (FCA) review into the long-term impact of AI on retail financial services (Mills Review).

Key points in the response include:

  • From early small-sample analysis, the FOS estimates that AI is likely to have contributed to up to around a third (35%) of responses to initial assessments. The use of AI in some of these sample cases may have helped consumers form more coherent, well-structured arguments. However, the incorrect or excessive use of generative AI can lead to a disproportionate amount of caseworker time being spent verifying the content’s accuracy and to disproportionate escalations to an ombudsman, which runs contrary to the FOS’ aim of delivering a quick and informal resolution service.
  • The FOS has also seen evidence of professional representatives using AI to make submissions to it. This presents the same challenges as when a consumer uses generative AI to submit a complaint. The FOS has seen examples of submissions from professional representatives that run to approximately 200 pages in response to a six-to-eight-page provisional decision, and which contain multiple inaccuracies.
  • At present, the FOS is receiving very few complaints about a firm’s use of AI.
  • The FOS supports transparency being at the centre of AI adoption by financial firms. As firms adopt more autonomous decision-making, the FOS would encourage the FCA to set clear expectations for regulated firms to provide the FOS and the consumer with a clear rationale for how AI contributed to an outcome, and to be able to explain how those decisions align with principles-based regulation, such as the Consumer Duty.
  • The FOS would welcome clarity from the FCA on expectations for record keeping, paths to human escalation, and dispute handling where no human is involved – such as where a consumer’s AI agent interacts directly with a firm’s AI chatbot.