Authors
Tim Pavlov
As criminals grow more sophisticated in their money laundering techniques, the fight against financial crime has become an increasingly complicated arms race. Criminals are using new technologies to create complex, layered transactions, making it difficult for financial institutions to monitor and detect suspicious activity using traditional methods.
In response to the challenge, financial institutions are investing in supplementing or replacing their traditional anti-money laundering (AML) software with more sophisticated AI-based technologies. Meanwhile, regulators must ensure that their supervisory frameworks are capable of evolving alongside this rapid deployment of AI-based AML technologies—without impeding the adoption of innovative approaches to combat emerging threats.
This article explores the benefits and limitations of AI-based AML technologies and how the financial services industry and regulators in Canada can work together to sharpen their surveillance and win the fight against money laundering.
In the fight against money laundering, financial institutions have traditionally relied on AML software that is rule- and scenario-based, offering basic statistical approaches to transaction monitoring. These tools look for red flags that could indicate criminal activity or suspicious transactions based on preprogrammed patterns: for example, deposits above certain thresholds, customers who appear on international sanctions lists, or transfers out of an account in amounts similar to those recently paid in. But as criminals become more sophisticated in their tactics, they launder their proceeds in ways that can appear to be legitimate financial transactions. This means that traditional AML tools are often ineffective in identifying fraudulent activities: as a result, they can return a high number of “false positives”, requiring costly, manual and onerous efforts on the part of financial institutions to distinguish the fraudulent transactions from the legitimate.
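To make this concrete, the toy sketch below (in Python) shows the kind of preprogrammed checks a rule- and scenario-based tool might run. The threshold, watchlist entry and data fields are hypothetical, and real monitoring systems apply far more rules against much richer data.

```python
# Illustrative only: a toy rule- and scenario-based screen of the kind described
# above. The threshold, watchlist and field names are hypothetical.
from dataclasses import dataclass

REPORTING_THRESHOLD = 10_000           # e.g. a large-deposit reporting threshold
SANCTIONS_LIST = {"ACME Trading Ltd"}  # placeholder watchlist entry

@dataclass
class Transaction:
    customer: str
    direction: str   # "in" or "out"
    amount: float

def flag_transactions(transactions: list[Transaction]) -> list[str]:
    """Return human-readable alerts for preprogrammed red-flag patterns."""
    alerts = []
    recent_deposits = [t.amount for t in transactions if t.direction == "in"]
    for t in transactions:
        if t.direction == "in" and t.amount >= REPORTING_THRESHOLD:
            alerts.append(f"Deposit over threshold: {t.customer} ({t.amount})")
        if t.customer in SANCTIONS_LIST:
            alerts.append(f"Sanctions-list match: {t.customer}")
        if t.direction == "out" and any(abs(t.amount - d) < 50 for d in recent_deposits):
            alerts.append(f"Outgoing transfer mirrors a recent deposit: {t.customer} ({t.amount})")
    return alerts

if __name__ == "__main__":
    sample = [
        Transaction("ACME Trading Ltd", "in", 12_500),
        Transaction("ACME Trading Ltd", "out", 12_480),
    ]
    for alert in flag_transactions(sample):
        print(alert)
```

Because every check is hard-coded, a scheme that stays just under the threshold or routes funds through fresh counterparties simply never triggers an alert, which is the gap AI-based approaches aim to close.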
AI-based AML technologies, on the other hand, use machine learning techniques to detect suspicious activity more accurately while reducing false positive alerts. These technologies can detect hidden transaction patterns among networks of people, compare behaviours with those that are historically common for an organization or its peers, assign risk scores to customers based on their past activity and other customer-related information, and triage events to close or deprioritize low-risk investigations. Moreover, machine learning models adapt quickly to new trends and continue to improve over time. According to a 2022 McKinsey & Company report1, by replacing rules-based software tools with AI-based AML applications, financial institutions can improve their identification of suspicious activities by up to 40 percent while substantially reducing their number of “false positives”.
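The sketch below illustrates, in very simplified form, what such a risk-scoring step can look like: an off-the-shelf anomaly-detection model (scikit-learn's IsolationForest) is trained on historical per-customer features and then scores new activity for triage. The features and figures are hypothetical, and this is not a description of any particular vendor's product.

```python
# Illustrative only: a minimal anomaly-detection sketch of the kind of machine
# learning approach described above. Feature choices and data are hypothetical;
# production systems use far richer inputs, validation and governance.
import numpy as np
from sklearn.ensemble import IsolationForest

# Per-customer features: [monthly transaction count, average amount,
# share of cross-border transfers, number of distinct counterparties]
history = np.array([
    [12,   800.0, 0.05,  6],
    [ 9,   650.0, 0.02,  4],
    [15,   900.0, 0.10,  7],
    [11,   700.0, 0.04,  5],
    [10,   750.0, 0.03,  6],
])

model = IsolationForest(contamination=0.1, random_state=0).fit(history)

# New activity to score: the second row moves unusually large amounts abroad
# through many counterparties.
new_activity = np.array([
    [13,   820.0, 0.06,  6],
    [40, 9_500.0, 0.90, 31],
])

# Lower score_samples values indicate more anomalous behaviour; rescale into a
# 0-1 "risk score" where higher means riskier, for triage and prioritization.
raw = model.score_samples(new_activity)
risk = (raw.max() - raw) / (raw.max() - raw.min() + 1e-9)
for features, score in zip(new_activity, risk):
    print(f"features={features.tolist()} risk_score={score:.2f}")
```

The point of the illustration is that the model learns what "normal" looks like from the data itself rather than from hand-written rules, so unusual combinations of behaviour can be surfaced even when no single preprogrammed threshold is crossed.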
Given these advantages, financial institutions are increasingly adopting AI-based AML technologies in their operations. According to a recent survey from AI chip maker NVIDIA, 91 percent of financial services firms in the US are either assessing AI or already using it to improve services and reduce fraud2.
AI-based AML technologies have tremendous potential to assist financial institutions in enhancing the effectiveness, efficiency and accuracy of core money laundering and terrorist financing risk detection and reporting systems. But to exploit AI's full potential, financial institutions must understand where those technologies can be useful and effective, and where they cannot. AI-based AML technologies can be advantageous when they have access to sufficient, high-quality data, as well as a variety of data attributes. However, when there is not enough existing data to build forward-looking intelligence, the benefits of AI-based technologies are less certain. In such instances, a traditional approach that relies on rule- and scenario-based tools may be more effective.
Among the many benefits of using AI-based AML technologies, the most notable include:
While the benefits of implementing AI-based technologies in AML are substantial, financial institutions should be aware of certain key limitations:
As the adoption of AI-based technologies in the financial services industry continues to accelerate, regulators are faced with a challenging task: they must reduce regulatory obstacles and encourage the industry to adopt innovative approaches to combat financial crimes while also ensuring that the supervisory framework can evolve to effectively address emerging threats in the industry. FINTRAC appears poised for the challenge, as it is adopting AI tools for its own use3.
Given these concurrent priorities, collaboration between financial institutions and regulators is vital for the future of AI in AML. By staying up to date on the sector’s evolving regulatory landscape, financial institutions can be better positioned to identify potential risk exposures and align their AI development and use with upcoming legislative expectations.
While Canada currently has no AI-specific regulatory framework, the legal landscape is expected to change quickly. Several upcoming and proposed legislative reforms in Canada address AI directly, the most significant of which is the proposed Artificial Intelligence and Data Act (AIDA). If passed, AIDA would regulate the design, development and use of AI systems in the private sector, and impose strict penalties on unlawful or fraudulent conduct resulting from the use of AI systems. To learn more about AIDA, read What’s new with artificial intelligence regulation in Canada and abroad?
The Office of the Superintendent of Financial Institutions (OSFI) has also been paying close attention to technology-related risks and has recently undertaken measures to regulate AI in the financial services sector. These actions come in the wake of OSFI’s 2024-2025 Annual Risk Outlook, which indicates that OSFI is assessing the impact of AI adoption on the risk landscape and strengthening existing guidelines to mitigate AI-related risks4. OSFI’s recent updated draft guidance on the responsible use of AI highlights these priorities, with a particular focus on effective AI governance, the use of data, model development and requirements for a model risk-management framework, the ethics of AI systems, and the explainability of such systems to customers5. Earlier this year, OSFI and the Financial Consumer Agency of Canada (FCAC) asked financial institutions to complete a voluntary questionnaire aimed at assessing their readiness to adopt AI technologies into their operations.
As criminals continue to develop increasingly sophisticated money laundering techniques, regulators and financial institutions must work together to counter their efforts, leveraging new technologies while balancing regulatory obligations. By developing AI tools that keep pace with an evolving regulatory framework, financial institutions will be able to leverage AI’s full potential while ensuring their compliance programs remain robust, transparent and effective.
To discuss these issues, please contact the author(s).
This publication is a general discussion of certain legal and related developments and should not be relied upon as legal advice. If you require legal advice, we would be pleased to discuss the issues in this publication with you, in the context of your particular circumstances.
For permission to republish this or any other publication, contact Janelle Weed.
© 2024 by Torys LLP.
All rights reserved.