Dutch financial institutions have been using artificial intelligence (AI) for some time and are experimenting with more advanced AI models, suggesting that the use of AI will increase further in the future. However, the Dutch regulators report that many institutions have indicated that they are reluctant to use generative AI for the time being, although they do see its potential. Some are gradually adopting it for supporting processes.

In 2019 the Dutch Central Bank (De Nederlandsche Bank, DNB) published a guidance document containing general principles for the use of AI in the financial sector. The document served as a discussion paper setting out DNB's preliminary views on the responsible use of AI in the financial sector, outlining various issues and ideas. Of particular importance was the set of general principles that DNB formulated regarding the use of AI in the financial sector, divided across six key aspects of responsible AI use: soundness, accountability, fairness, ethics, skills and transparency. Key points noted by DNB in the document included:

  • AI applications in the financial sector need to be reliable and accurate, behave predictably, and operate within the boundaries of applicable rules and regulations;
  • financial undertakings need to be accountable for the use of AI applications (AI applications that do not function correctly could result in damages for the financial undertaking, its customers and other stakeholders);
  • AI applications should not inadvertently disadvantage certain groups of customers;
  • all people in a financial undertaking (from the work floor to the boardroom) should have a sufficient understanding of the strengths and limitations of the AI-enabled systems that they work with; and
  • financial undertakings need to be able to explain how and why they use AI in their business processes and (where reasonably appropriate) how these applications function.

In April, DNB and the Dutch markets regulator, the Dutch Authority for the Financial Markets (AFM), published a joint report on the impact of the use of AI in the financial sector and on regulatory oversight. Apart from highlighting the potential benefits and risks of AI, which have been well rehearsed in other jurisdictions, the Dutch regulators emphasised the importance of ongoing dialogue with the financial sector (a symposium is scheduled for later this year, followed by roundtable events) and the need to expand their regulatory oversight to assess the implications of AI. There were also a couple of other key take-aways worth noting.

First, DNB and the AFM provided a clear message that financial institutions are expected to deploy AI responsibly. Dutch regulatory objectives and standards remain independent of the technology used, and existing regulations apply when AI is employed.

Second, DNB and the AFM recognise the need to develop their supervisory methods and procedures, focusing on risk management, the modalities of application and the outcomes of AI deployment.

Finally, as the importance of AI in the financial sector grows, increasing the need for further clarification and specification of regulatory requirements, the Dutch regulators would prefer that such requirements be harmonised at EU level. As for the upcoming AI Act, which designates certain AI systems, such as those used for credit assessments and insurance, as high-risk, the AFM and DNB will be tasked with overseeing the AI deployed by Dutch financial institutions in collaboration with other Member State regulators and European supervisors. The introduction of the AI Act will require financial institutions that apply AI to pay extra attention to the proper protection of fundamental rights.