The Hong Kong Monetary Authority (HKMA) has published a circular setting out key observations and good practices relating to anti-money laundering (AML) and counter-financing of terrorism (CFT) control measures applicable to remote customer on-boarding initiatives. The feedback is based on recent thematic reviews, engagement with authorized institutions (AIs) and technology firms in the Fintech Supervisory Sandbox and Chatroom, as well as information obtained through supervising virtual banks.
Remote on-boarding and digital delivery of financial services have become increasingly important, particularly due to the COVID-19 outbreak. The HKMA has provided specific high-level regulatory expectations on AML/CFT control measures associated with remote on-boarding to assist AIs. AIs launching remote on-boarding initiatives or considering such proposals are encouraged to review the regulatory expectations set by the HKMA, as well as the observations (on AIs reviewed) and good practices contained in the Annex to the circular.
The regulatory expectations (in bold), together with the key observations and good practices, are set out below.
AIs should adequately assess money laundering (ML) / terrorist financing (TF) risks associated with a proposed remote on-boarding initiative, prior to the launch of such initiative
- There is no prescribed format for the assessment; for some AIs the ML/TF risk assessment formed part of a wider-scope assessment and was more formal in nature, while for others it took a standalone format;
- Common steps taken in the pre-implementation phase include conducting due diligence on third-party vendors of remote on-boarding technology, testing reliability of the vendors’ technology solutions, and assessing possible impact and risks (e.g. impersonation risks) arising from remote on-boarding initiatives; and
- AIs that adopted off-the-shelf solutions for identity authentication and identity matching for remote on-boarding initiatives worked closely with third-party vendors, allowing them to understand how the solutions functioned, as well as their limitations. AIs that had more limited knowledge were exposed to greater risks of the technology delivering unintended or inappropriate outcomes, leading to less effective overall management of associated risks.
AIs should apply a risk-based approach in the design and implementation of AML/CFT control measures for remote on-boarding initiatives
- AIs should be able to demonstrate that customer due diligence (CDD) measures implemented are commensurate with the ML/TF risks associated with a particular business relationship, irrespective of the means used to on-board a customer;
- AIs recognised that remote on-boarding may introduce different ML/TF vulnerabilities compared to traditional processes; accordingly, a phased approach was generally adopted when launching remote on-boarding services, e.g. initially targeting lower-risk customer segments and/or limiting service scope; and
- Some AIs chose not to on-board higher risk customers remotely, while others conducted part of the process through teleconference or video conference for applicants that displayed some higher-risk characteristics.
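The risk-based routing described above can be sketched in code. This is a hypothetical illustration only, assuming a simple set of risk indicators produced by up-front screening; the indicator names, thresholds, and routing outcomes are the author's assumptions, not criteria prescribed by the HKMA or used by any particular AI.

```python
# Hypothetical sketch: routing an applicant to an on-boarding channel
# based on higher-risk characteristics identified during screening.
# Indicator names and routing rules are illustrative assumptions.

HIGHER_RISK_INDICATORS = {"pep", "high_risk_jurisdiction", "complex_ownership"}

def route_applicant(indicators: set[str]) -> str:
    """Return an on-boarding channel for an applicant.

    `indicators` is the set of risk flags raised during screening.
    Multiple higher-risk hits exclude the remote channel entirely;
    a single hit triggers an enhanced remote process (e.g. video
    conference); otherwise the standard remote flow applies.
    """
    hits = indicators & HIGHER_RISK_INDICATORS
    if len(hits) >= 2:
        return "branch_onboarding"       # not on-boarded remotely
    if len(hits) == 1:
        return "video_conference_cdd"    # part of the process via video call
    return "standard_remote_onboarding"
```

In practice such routing logic would sit alongside the AI's wider customer risk assessment rather than replace it.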
AIs should monitor and manage the ability of the technology adopted to meet AML/CFT requirements on an ongoing basis
- All AIs adopted ongoing quality assurance processes over the effectiveness of the end-to-end AML/CFT controls for remote on-boarding, including the technology deployed, and were generally cautious in their initial approach;
- AIs generally applied manual checks of selfie images, ID documents and liveness detection processes during the early stages of implementation to assess performance (e.g. false-acceptance rate and false-rejection rate) and to identify any emerging risks. Manual checks are helpful for identifying any abnormalities and for implementing appropriate risk mitigating measures or contingencies e.g. where the artificial intelligence applications do not perform as intended; and
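The false-acceptance and false-rejection rates mentioned above could be measured from manual-check sampling along the following lines. This is a minimal sketch under an assumed data format (pairs of system verdict and manually confirmed ground truth); it is not drawn from the circular.

```python
# Minimal sketch: computing false-acceptance rate (FAR) and
# false-rejection rate (FRR) from manually reviewed samples.
# Each decision is (accepted, genuine): `accepted` is the automated
# system's verdict, `genuine` whether manual review confirmed a
# real match. The data format is an illustrative assumption.

def false_acceptance_rate(decisions: list[tuple[bool, bool]]) -> float:
    """Fraction of impostor attempts the system wrongly accepted."""
    impostor_verdicts = [accepted for accepted, genuine in decisions
                         if not genuine]
    return sum(impostor_verdicts) / len(impostor_verdicts)

def false_rejection_rate(decisions: list[tuple[bool, bool]]) -> float:
    """Fraction of genuine applicants the system wrongly rejected."""
    genuine_verdicts = [accepted for accepted, genuine in decisions
                        if genuine]
    return sum(1 for accepted in genuine_verdicts
               if not accepted) / len(genuine_verdicts)
```

Tracking these two rates over time is one way to detect the kind of drift in technology performance that manual checks are intended to catch.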
- All AIs considered some form of post-implementation review after remote on-boarding initiatives were launched (i.e. within 6 to 12 months of implementation).
Ongoing monitoring should take into account vulnerabilities associated with the product and the delivery channels of the product
- It is important for AIs to implement a monitoring system which is tailored to the risk profile of a customer relationship, particularly in circumstances where ML/TF risks will often only become apparent once the customer account is in operation;
- Some AIs indicated that they planned to apply specific rule-based detection scenarios to monitor transactions of customers on-boarded remotely, while others were using or exploring different data points to monitor customer behaviour, e.g. data obtained for fraud prevention purposes; and
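A rule-based detection scenario targeting remotely on-boarded customers might look like the following sketch. The rule, the monitoring window, and the monetary threshold are all illustrative assumptions for the purpose of this example; actual scenarios would be calibrated to each AI's own risk assessment.

```python
# Illustrative sketch of one rule-based detection scenario for
# remotely on-boarded customers: flagging unusually high turnover
# shortly after account opening. Thresholds are assumed values.

from dataclasses import dataclass

@dataclass
class Transaction:
    day_after_onboarding: int   # days elapsed since account opening
    amount: float               # transaction amount (e.g. in HKD)

def flag_early_high_turnover(txns: list[Transaction],
                             window_days: int = 30,
                             threshold: float = 500_000.0) -> bool:
    """Return True if aggregate turnover within `window_days` of
    remote on-boarding exceeds `threshold`, warranting review."""
    early_total = sum(t.amount for t in txns
                      if t.day_after_onboarding <= window_days)
    return early_total > threshold
```

Scenarios of this kind would typically feed an alert queue for investigation rather than trigger automatic action, and could be combined with the fraud-prevention data points mentioned above.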
- Good practices were noted in relation to the regular sharing of information and intelligence e.g. some AIs established internal working groups, others held regular meetings to exchange information, conduct trend analysis and investigate ML-related fraud cases.