On 24 February 2025, the Financial Conduct Authority (FCA) published the latest research note in its artificial intelligence (AI) series, asking ‘how can AI’s role in credit decisions be explained?’.

The note explores the relative effectiveness of different methods for explaining the outputs of AI to consumers in the context of determining consumers’ creditworthiness.

The FCA explains that it tested whether participants were able to identify errors caused either by incorrect data used by a credit scoring algorithm or by flaws in the algorithm’s decision logic itself. It found that the method of explaining algorithm-assisted decisions significantly impacted participants’ ability to judge these decisions; however, the impact of the explanations it tested varied depending on the type of error, in ways that were not anticipated. The research note proposes two hypotheses to explain the inconsistent effects of the explanation genres.
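
To illustrate the distinction between the two error types, a minimal sketch is set out below. The note does not publish its test materials, so the scoring rule, figures and function names here are purely hypothetical: a data error feeds correct logic the wrong input, whereas a decision-logic error applies a flawed rule to correct input.

```python
# Illustrative only: a toy scoring rule, not the FCA's study materials or any real model.

def credit_score(income: float, missed_payments: int) -> float:
    """Hypothetical score: higher income raises it, missed payments lower it."""
    return 0.01 * income - 50 * missed_payments

# Data error: the correct rule is applied to incorrect input data
# (e.g. three missed payments recorded for a consumer who missed none).
score_with_data_error = credit_score(income=30_000, missed_payments=3)

# Decision-logic error: the input data are correct, but the rule itself is flawed
# (e.g. the sign on income is inverted, so higher income lowers the score).
def flawed_credit_score(income: float, missed_payments: int) -> float:
    return -0.01 * income - 50 * missed_payments

score_with_logic_error = flawed_credit_score(income=30_000, missed_payments=0)

print(score_with_data_error, score_with_logic_error)
```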

Discussing its findings, the FCA notes that they:

  • Reiterate the value of testing accompanying materials that may be provided to consumers when explaining AI, machine learning and/or algorithmic decision-making. 
  • Underscore the importance of testing consumers’ decision-making within the relevant context, rather than relying solely on self-reported attitudes.

The FCA flags that future research could explore how best to explain AI-assisted decisions in other contexts within financial services, the specific mechanisms by which explainability methods may impact consumers, alternative ways of presenting explanation genres, and the broader consumer journey beyond recognising errors.