Update
30.11.2023
AI is a hot topic in the financial sector, among legislators, regulators and financial institutions alike, not least because of the popularity of solutions such as ChatGPT. In this series of Q&AIs, we discuss what financial institutions need to consider when using AI in the performance of their regulated activities and when developing AI solutions for that purpose.

Although the EU AI Act is still making its way through the EU legislative process, EU and Dutch regulators point out that existing regulatory frameworks already impose standards on financial institutions when they use and/or develop AI systems. They have also published reports, principles and other guidance on the use and development of AI systems. In this series of Q&AIs, we will focus on the standards arising from the existing regulatory framework. In this respect, we discuss the use of AI in the context of third-party service providers and the management of ICT risks. Furthermore, we set out how the use of AI affects the product distribution chain and the client relationship, including client due diligence and transaction monitoring. Finally, we discuss what the AI Act means for financial institutions.

  • The complete Q&AI is available here

    Question 1. May financial institutions use AI when performing regulated activities?
    Financial institutions may use AI in relation to regulated activities. While AI may enable them to enhance their business processes, it also has the potential to cause incidents that could harm a financial institution and/or its clients. Therefore, the use of AI must comply with existing regulatory requirements. Both EU regulators and Dutch regulators – the Dutch Central Bank (DNB) and the Netherlands Authority for the Financial Markets (AFM) – have published reports, principles and other guidance on the use of AI by financial institutions.

    DNB, for instance, issued its General Principles for the Use of Artificial Intelligence in the Financial Sector in July 2019. Soundness, accountability, fairness, ethics, skills and transparency (or ‘SAFEST’) form a framework within which financial institutions can responsibly shape the deployment of AI. The AFM acknowledges the use of AI in, amongst others, its supervision forecast report ‘Trendzicht 2024’ of November 2023. The AFM stipulates that the use of AI can contribute to efficiency in the financial sector. However, besides the positive effects of increased supply and diversity of providers, the digitalisation of financial markets leads to new risks, for example in the case of the uncontrolled use of AI in advising on and distributing financial products.

    Specifically for (re)insurers, EIOPA published its Artificial Intelligence Governance Principles in June 2021, which it plans to update. EIOPA has also announced in its work programme that it will develop a sound regime for the use of AI by the insurance sector, complementary to the AI Act. In these reports and principles, the regulators stress that the existing regulatory framework also applies to the use of AI by financial institutions. Examples of relevant regulatory requirements in the use of AI include: requirements relating to ethical business operations, sound and controlled business operations, outsourcing, ICT risk management, product approval and review processes, and customer due diligence and transaction monitoring. In this Q&AI series, we elaborate on this in more detail.

    Question 2. Third-party service providers: What do financial institutions (already) need to consider when they use third-party AI solutions?
    The use of third-party AI solutions is likely to be captured by the outsourcing rules as well as the Digital Operational Resilience Act (DORA). The main characteristics of an outsourcing arrangement are (i) the engagement of a service provider; (ii) the service provider performing services that are part of the financial institution’s business operations/its regulated business or that support its essential business processes; and (iii) the activities performed by the service provider otherwise being performed by the financial institution itself. Additionally, third-party AI solutions are captured by DORA. DORA applies as of 17 January 2025 and covers all (existing and new) ICT contracts, whether or not such contracts constitute outsourcing.

    Key requirements in relation to arrangements with third-party service providers following from the outsourcing rules and DORA are (i) management of the risks arising from the arrangement with the service provider; (ii) monitoring of the arrangement; and (iii) inclusion of specific provisions in the contract with the service provider. If the arrangement is deemed to be critical/important, there is an obligation to notify the regulator, and additional requirements apply with the aim of ensuring the business continuity of the financial entity and keeping it in control.

    Finally, and irrespective of whether an arrangement is captured by the outsourcing rules or DORA, financial institutions need to ensure sound and controlled business operations. Whilst this is a very broad requirement, regulators tend to use it as a legal basis for guidance on topics for which there is no specific legislation (yet) for financial institutions. Existing guidelines also already include specific requirements in respect of managing third-party risks, which could be extended to cover risks relating to the use of AI when outsourcing. Relevant examples are the EIOPA Guidelines on outsourcing to cloud service providers for (re)insurers and the ESMA Guidelines on outsourcing to cloud service providers for entities that fall within the scope of MiFID, the AIFMD, UCITS, etc. For credit institutions and payment service providers, the EBA has published Guidelines on outsourcing arrangements. The existing guidelines on outsourcing are expected to continue to co-exist with DORA (as their scope differs in part), but they will need to be adapted to align with DORA.

    Question 3. ICT risks: What are the (current) requirements for financial institutions to manage ICT risks related to AI applications?
    As set out in the previous Q&AI, the use of third-party AI applications is likely to be captured by the Digital Operational Resilience Act (DORA). DORA applies as of 17 January 2025 and establishes a uniform and comprehensive framework for the digital operational resilience of the financial sector. All financial institutions in scope of DORA will have to put in place sufficient safeguards to protect against ICT risks, including risks relating to the use of AI applications.

    The relevant requirements in this respect pursuant to DORA are: (i) the management body is responsible and accountable for the ICT risk-management and policy framework; (ii) ICT-related incidents (and cyber threats) will have to be classified and major incidents reported to the regulator; (iii) an independent testing programme needs to be established, including an advanced (‘threat-led penetration testing’) programme for certain financial institutions; and (iv) risks in relation to ICT third-party service providers need to be managed, ranging from appropriate due diligence to contracting with and monitoring of ICT third-party service providers. Detailed terms are to be included in the contract with the ICT third-party service provider. The foregoing extends to the use of AI solutions, especially if these are obtained from third-party service providers.

    Finally, and irrespective of whether an arrangement is captured by DORA, financial institutions need to ensure sound and controlled business operations. Whilst this is a very broad requirement, regulators tend to use it as a legal basis to induce financial institutions to organise their business operations in a certain manner, including where ICT risks and integrity risks resulting from the use of AI solutions may be involved. It should be noted that, while DORA codifies and harmonises various rules and expectations regarding ICT risk management, existing requirements and expectations from regulators already provide guidance on managing ICT risks. Relevant examples are the EIOPA Guidelines on information and communication technology security and governance for (re)insurers and the EBA Guidelines on ICT and security risk management for credit institutions, investment firms and payment service providers.

    Question 4. Product distribution and client relationship: What do financial institutions need to take into account when developing financial products using AI and when AI is used in contact with clients?
    Financial institutions that offer or develop financial products are required to take into account the interests of the client and, if applicable, the interests of the beneficiary of the financial product. They need to be able to demonstrate that the product is the result of balancing those interests. In addition, product governance requirements include the need to identify a target market for the product. Subsequently, the product information and distribution will need to be tailored to that target market.

    If financial institutions use AI applications to develop financial products, they will need to ensure that they (continue to) comply with product development requirements and be able to evidence how they have done so. For instance, while AI may help to determine the target market of a financial product and tailor the product to that target market, human intervention may be appropriate to avoid the financial product being based on (unconscious) biases or resulting in the (unconscious) exclusion of certain client groups. The use of AI applications may also affect (either positively or negatively) the professional competence of client-facing employees of financial institutions. Employees using AI applications have access to more information, which may contribute to their knowledge; the information provided will be more consistent, and client data from other files can be taken into account more easily. However, financial institutions cannot fully rely on AI applications.

    Employees providing advice will need to continue meeting the professional competence requirements, including obtaining the required certifications, irrespective of the availability of AI applications. In addition, we note that specific requirements in relation to the process of developing automated advice will apply. These requirements are in the final stages of the legislative process, and it is not yet clear when they will apply. The AFM has already provided its view on automated advice. According to the AFM, the duty of care will also need to be considered when providing automated advice and, more generally, when a financial institution uses AI applications.

    Question 5. CDD and transaction monitoring: What are the key considerations when financial institutions use AI for CDD or transaction monitoring purposes?
    A financial institution acts as a 'gatekeeper' with regard to anti-money laundering and counter-terrorist financing checks. On 18 October 2023, DNB published a new policy document ‘Wwft Q&As and Good Practices’ for consultation. DNB anticipates that the document will replace the existing ‘Guideline on the Anti-Money Laundering and Anti-Terrorist Financing Act and the Sanction Act’. In two of the good practices set out in the consultation document, DNB refers to the use of AI (GP 4.1: Good practice – intelligence and transaction monitoring and GP 4.15: Good practice – various alert generation methods). DNB mentions that the use of AI can contribute to effective transaction monitoring and that the use of AI models may make it possible to better detect and investigate potentially unusual patterns and complex transactions. It also follows from case law that a financial institution may use AI applications for data analysis and statistical research when performing CDD and transaction monitoring, including in relation to assigning a risk profile and monitoring whether the client still fits within the assigned risk profile. These processes, including the use of AI, must be adequately documented to evidence that the processes in place result in the required outcome, thereby ensuring ethical business operations and, more specifically, compliance with the Dutch AML Act (Wwft). Financial institutions should, for example, avoid the use of AI resulting in unexplainable rejections of clients or the unjustified flagging of transactions (‘false positives’).

    When using AI applications for CDD or transaction monitoring, financial institutions should assess whether such use results in the outsourcing of CDD or transaction monitoring. The Dutch AML Act sets specific conditions for the outsourcing of CDD. Moreover, outsourcing of transaction monitoring is prohibited (unless, according to DNB, it is outsourced within the group). The CDD and transaction monitoring process needs to be adequately documented; this includes the use of AI solutions. This may, for instance, be difficult for AI that involves (black box) machine learning.

    Finally, financial institutions should be mindful of the risks generally relevant to the use of AI applications, including where AI applications are used for CDD or transaction monitoring. Reference is made to the previous Q&AIs, which discuss those risks.

    Question 6. AI Act: What does the AI Act mean for financial institutions?
    The proposal for the AI Act sets out harmonised rules for the development, placing on the market, and use of AI in the EU. It proposes a risk-based approach, classifying AI systems according to risk and imposing various requirements on their development and use. The AI Act is still in development. EU legislative bodies are currently trying to reconcile their individual positions in trilogue negotiations. The European Parliament has approved its negotiating position and adopted amendments to the proposed text of the AI Act. The final text is expected to be adopted towards the end of 2023.

    The AI Act is a horizontal piece of legislation, which means that its rules apply across all industries. As such, there is only a limited number of provisions specifically focused on the use of AI in the financial sector. The only explicit references to financial use-cases are credit scoring models and risk assessment tools in the insurance sector. In this context, AI systems used to evaluate the credit scores or creditworthiness of natural persons will likely be classified as high-risk, since they determine those persons’ access to credit. The same designation is expected for AI systems used for risk assessment in the case of life and health insurance, which, if not properly designed, can lead to financial exclusion and discrimination in respect of these essential financial products. The European Parliament proposes that AI systems deployed for the purpose of detecting fraud in the offering of financial services should not be considered high-risk under the AI Act. There are, however, other elements of the AI Act that can be very relevant to the financial sector, such as the use of AI in respect of life and health insurance. Lastly, the AI Act will include certain specific exemptions for credit institutions.

    The AI Act aims to avoid overlap with the existing requirements for financial institutions; it will ultimately create a framework that is additional to the existing framework for financial institutions. Please also see our Q&AIs for more information about the existing framework regarding: (i) the use of third-party AI solutions; (ii) management of ICT risks; (iii) product development; (iv) distribution and the client relationship; and (v) CDD and transaction monitoring.
