AI in financial services: are consumers better protected, or more at risk?

 
Artificial Intelligence (AI) has the potential to reshape the financial services sector and transform how customers interact with their information, but it also raises important challenges and risks. We weigh the trade-offs of integrating AI into the financial services space, asking: “Is it worth the risk?”

Efficiency and ease: advantages of AI 

AI introduces a new suite of tools into the financial sector and beyond. In the right hands, the next generation of AI-powered tools can deliver a host of benefits for financial services and products, to the advantage of both consumers and the company’s bottom line.

Enhancing the consumer experience

Ensuring that consumers make the appropriate financial decisions has always been challenging, increasingly so as financial products and services become more complex. Traditionally, consumer protection was achieved by requiring financial institutions (FIs) to disclose key product information such as risks, prices, and product features. It was assumed that such disclosures would steer consumers to the right decisions. Legislative frameworks are now increasingly requiring financial institutions to also consider a customer’s needs and circumstances when suggesting or selling products. AI has the potential to transform how the industry interacts with its customers, including the amount and type of information that will be disclosed as well as the recommendations FIs may make with respect to products or services.

By analyzing vast amounts of data, including customer behaviours, social media, demographics and transaction histories, AI can enable financial institutions to offer hyper-customized products to meet a customer’s needs and preferences. Product disclosures could also be tailored to the customer’s particular risk profile or budgetary needs. In addition, the realm of predictive AI may alter how consumers think about and respond to financial risks through its ability to forecast potential future financial issues, suggest preemptive solutions (e.g., budget advice) and even predict market trends for portfolio management.

Increasing operational efficiencies

Technology is changing the way companies do business and how they allocate internal resources. By automating repetitive tasks such as data entry, document processing and customer onboarding, AI can improve operational efficiencies, increase productivity and reduce errors. For example, AI chatbot interfaces can revolutionize the new customer onboarding process: “By analyzing patterns and trends in customer behavior, these chatbots can anticipate potential roadblocks or confusion points and offer preemptive solutions”1. Additionally, AI can free up human resources by automating decision-making in resource-intensive processes, such as loan origination or insurance underwriting. Accelerating processing time will also improve customer satisfaction.

Improving compliance and risk management

AI and machine learning (ML) technologies are increasingly being leveraged in risk management to satisfy compliance obligations and improve business, investment and credit decisions. Examples of such applications include:

  • AI/ML technologies such as natural language processing and text mining can be used to monitor sales and trader activity for non-compliance with regulatory requirements.
  • The implementation of AI technologies not only improves the monitoring of compliance with anti-money laundering (AML) legislative requirements but can also automate the AML process management, thus increasing an institution’s compliance abilities as well as its operational efficiencies.

In terms of evaluating potential customer risks, AI can improve an institution’s credit decision-making process by expanding the range of considered data beyond traditional statistical and historical data to a much wider range of data points, such as social media activity, financial transactions, behaviours and even mobile phone usage patterns. This development not only allows for a more accurate evaluation of a borrower’s creditworthiness but can also expand credit offerings to “unbanked” populations.

Opening the door to bad actors: risks of AI

Alongside its many advantages, AI presents risks to consumers and financial institutions, including fraudsters leveraging AI tools, the dissemination and use of misinformation, and security breaches of sensitive banking data.

Fraud

AI systems’ “self-learning” capabilities, which constantly improve a system’s ability to fool computer-based detection systems, magnify both the nature and scope of potential fraud2. For example, AI tools can be used to make fictional videos, audio recordings and documents designed to replicate the likeness, voice or writing of an individual. Often referred to as “deepfakes”, these falsified clips and documents are sometimes used to target more vulnerable segments of the population, who are more likely to mistake them as authentic. This is occurring with increasing frequency: a recent report by identity verification platform Sumsub found that deepfake incidents in the fintech sector increased by 700% in 2023 over the previous year3.

Misinformation

“Deepfake incidents” not only increase the risk of fraud but can also lead to the dissemination of misleading information, thus posing a significant risk to the integrity of information and decision-making processes. The creation of synthetic data and insights through AI increases the potential dissemination of false or misleading information4. As noted above, one of the advantages of AI is the leveraging of a wider scope of information to arrive at a more customized product offering. However, if the underlying data is incorrect, incomplete or imbued with biases that influence the AI’s output, the result can be inappropriate product offerings and, ultimately, bad decisions.

Data protection

AI depends on an immense amount of data. The data used to train and use AI tools with applications in the financial sector includes personal information and other sensitive data (for more on data integrity, read “What should be included in my organization’s AI policy?: A data governance checklist”). This can engage a number of privacy compliance considerations, such as whether sufficient consent has been obtained to use an individual’s personal information for the model’s given purpose, questions regarding the ability to retain or delete the data in question, and the extent of control that AI vendors have over data their customers feed into their models. Moreover, AI systems also present a suite of new potential cybersecurity vulnerabilities for threat actors to attempt to exploit, whether by obtaining data meant to be kept confidential or compromising the system’s functionality (for more on cyberthreats, read “A sword and a shield: AI’s dual-natured role in cybersecurity”).

Government initiatives to leverage the benefits of AI while mitigating the risks

Existing Canadian law applies to AI just as it does to other technologies, and several areas of law, such as human rights, privacy, tort and intellectual property, bear directly on AI. In addition, several new government initiatives focus specifically on AI:

  • Earlier this year, the Office of the Superintendent of Financial Institutions released an updated draft of Guideline E-23 Model Risk Management, which sets out OSFI’s expectations with respect to an institution’s enterprise-wide AI model risk management. The industry awaits the final release of the Guideline.
  • The proposed federal Bill C-27 includes the Artificial Intelligence and Data Act (AIDA), which, if passed, would implement Canada’s first general artificial intelligence statute. The AIDA creates Canada-wide obligations and prohibitions pertaining to the design, development and use of artificial intelligence systems in the course of international or interprovincial trade and commerce. In its current draft, it applies to any “technological system that, autonomously or partly autonomously, processes data related to human activities through the use of a genetic algorithm, a neural network, machine learning or another technique in order to generate content or make decisions, recommendations or predictions”. Bill C-27 is currently subject to committee review following its second reading in the House of Commons (for more on this topic, read “What’s new with artificial intelligence regulation in Canada and abroad?”).
  • More recently, the federal government launched the third phase of its review of the legislative frameworks governing federal financial institutions by publishing the consultation paper, Proposals to Strengthen Canada’s Financial Sector. The Paper highlights the Department of Finance’s intention to develop a regulatory approach to AI in the financial sector. In collaboration with an AI expert, the Department of Finance plans to consult a broad range of domestic and international stakeholders in its effort to identify and assess the potential risks of AI and to develop a federal strategy for AI in the financial sector.

Conclusion

The future of AI integration with financial services and products presents some tantalizing potential opportunities; however, because AI is built for automation and scale, even minor risks may accordingly scale up quickly, posing serious risks to the financial sector and consumers. As noted in the Federal Government Consultation Paper: Proposals to Strengthen Canada’s Financial Sector5:

Financial sector adoption of AI could enhance the consumer experience by offering faster, more personalized financial services, while also increasing efficiency, improving risk management, and advancing the ability of financial institutions to detect and prevent fraud in real-time. However, AI could also introduce new or heightened risks to consumers and financial institutions, which must be mitigated to ensure the ongoing stability of the financial sector.

Over the coming months, we will publish a series of articles focused on the benefits of AI in the financial services space and the efforts of regulators and policymakers to harness the benefits of AI while mitigating potential risks.


  1. Satish Lalchand, Val Srinivas, Brendan Maggiore and Joshua Henderson, “Generative AI is expected to magnify the risk of deepfakes and other fraud in banking”, Deloitte, May 29, 2024. 
  2. Isabelle Bousquette, “Deepfakes are coming for the financial sector”, The Wall Street Journal, April 3, 2024.

This article was published as part of the Q4 2024 Torys Quarterly, “Machine capital: mapping AI risk”.

To discuss these issues, please contact the author(s).

This publication is a general discussion of certain legal and related developments and should not be relied upon as legal advice. If you require legal advice, we would be pleased to discuss the issues in this publication with you, in the context of your particular circumstances.

For permission to republish this or any other publication, contact Janelle Weed.

© 2024 by Torys LLP.

All rights reserved.