De Nederlandsche Bank (DNB) and the Netherlands Authority for the Financial Markets (AFM) have published a joint report on the impact of artificial intelligence (AI) on the financial sector and on regulatory oversight. The report underscores the growing importance of AI in the financial sector and the regulators’ proactive approach to addressing its opportunities and challenges. The AFM and DNB’s report (in Dutch) can be found here. The key takeaways from the report are summarized below.

In their report, DNB and AFM highlight the long-standing use of AI by Dutch financial institutions and the ongoing experimentation with more advanced AI models. They emphasize the potential benefits of AI, such as improved fraud detection, enhanced customer service, and increased operational efficiency. However, the regulators also acknowledge the risks associated with AI, including concerns about data quality, privacy, and algorithmic bias.

Of particular note is the regulators’ commitment to ensuring responsible AI deployment by financial institutions. They stress that existing regulations apply to AI usage and see a need for expanded regulatory oversight to effectively assess the implications of AI for financial markets. Moreover, DNB and AFM emphasize the importance of ongoing dialogue with the financial sector to address these challenges collaboratively. They have already engaged with various stakeholders and plan to continue these discussions through a symposium later this year, followed by roundtable events.

Key takeaways from DNB and AFM’s report on AI in Dutch financial institutions:

  • Opportunities and risks: AI is gaining traction worldwide in the financial sector, including in the Netherlands. While AI presents substantial opportunities such as improved customer service and cost reduction, it also comes with risks, including those related to data quality, data protection, explainability, incorrect results, discrimination, and dependence on third parties.
  • Expectation of responsible AI use: Financial institutions are expected to deploy AI responsibly. Regulatory objectives and standards remain independent of the technology used, and existing regulations apply even when AI is employed. Supervision by AFM and DNB includes oversight of AI usage based on existing financial laws and regulations.
  • Implications for regulatory oversight: The use of AI necessitates an expansion of knowledge by AFM and DNB. Supervision methods and procedures may need to be developed or adjusted to account for new techniques. Supervision will focus on risk management, application modalities, and outcomes of AI deployment.
  • Additional requirements based on AI application: Depending on the application of AI, additional requirements may be necessary. While specific regulations for responsible AI usage are currently limited, as the importance of AI in the financial sector grows, regulatory frameworks need clarification or specification, preferably harmonized at EU level.
  • Balancing responsible AI and innovation: Regulatory frameworks should strike a balance between responsible AI deployment and fostering innovation.
  • EU AI Act implications: The EU’s upcoming AI Act designates certain AI systems, such as those used for credit assessments and insurance, as high-risk. This entails additional requirements for their development and responsible use. AFM and DNB support this approach and encourage compliance with the requirements for high-risk AI systems.
  • Protection of fundamental rights: Financial institutions employing AI must ensure the protection of fundamental rights. For high-risk applications, institutions are obligated to assess potential impacts on the rights of individuals or groups.
  • Regulatory oversight responsibility: In principle, the existing financial regulators, such as AFM and DNB in the Netherlands, will be tasked with overseeing compliance with the EU AI Act insofar as it concerns AI deployed by financial institutions. Collaboration and coordination among regulators, at both the European and national levels, will be crucial given the unique risks that AI systems pose to fundamental rights.