Artificial Intelligence (AI) has the potential to reshape the financial services sector and transform how customers interact with their financial information, but it also raises important challenges and risks. We weigh the tradeoffs of integrating AI into the financial services space, asking the question: “Is it worth the risk?”
AI introduces a new suite of tools into the financial sector and beyond. In the right hands, the next generation of AI-powered tools can help providers of financial services and products deliver a host of benefits to both consumers and the bottom line.
Ensuring that consumers make appropriate financial decisions has always been challenging, and increasingly so as financial products and services become more complex. Traditionally, consumer protection was achieved by requiring financial institutions (FIs) to disclose key product information such as risks, prices and product features, on the assumption that such disclosures would steer consumers to the right decisions. Legislative frameworks are now increasingly requiring financial institutions to also consider a customer’s needs and circumstances when suggesting or selling products. AI has the potential to transform how the industry interacts with its customers, including the amount and type of information that is disclosed as well as the recommendations FIs may make with respect to products or services.
By analyzing vast amounts of data, including customer behaviours, social media, demographics and transaction histories, AI can enable financial institutions to offer hyper-customized products that meet a customer’s needs and preferences. Product disclosures could also be tailored to the customer’s particular risk profile or budgetary needs. In addition, predictive AI may alter how consumers think about and respond to financial risk through its ability to forecast potential financial issues, suggest preemptive solutions (e.g., budgeting advice) and even predict market trends for portfolio management.
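To make the idea concrete, here is a minimal sketch (in Python, using scikit-learn) of how transaction-level features might be clustered into customer segments that drive tailored offers and disclosures. The feature names, toy data and segment-to-offer mapping are illustrative assumptions only, not any institution’s actual model.

```python
# Illustrative sketch only: a toy customer-segmentation pipeline.
# All feature names, values and the cluster count are hypothetical.
import numpy as np
from sklearn.preprocessing import StandardScaler
from sklearn.cluster import KMeans

# Hypothetical per-customer features: [monthly_spend, savings_rate, txn_count]
customers = np.array([
    [1200.0, 0.05, 45],
    [5400.0, 0.30, 12],
    [ 800.0, 0.01, 60],
    [4100.0, 0.25, 18],
    [ 950.0, 0.02, 52],
])

# Standardize so no single feature dominates the distance metric
features = StandardScaler().fit_transform(customers)

# Group customers into segments; an FI might map each segment to a
# tailored product shelf and a matching disclosure template
segments = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(features)

offers = {0: "high-yield savings with detailed risk disclosure",
          1: "budgeting tools with simplified fee disclosure"}
for row, seg in zip(customers, segments):
    print(f"monthly spend ${row[0]:,.0f} -> segment {seg}: {offers[seg]}")
```

A real deployment would, of course, layer consent, suitability and disclosure obligations on top of any such segmentation.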
Technology is changing the way companies do business and how they allocate internal resources. By automating repetitive tasks such as data entry, document processing and customer onboarding, AI can improve operational efficiency, increase productivity and reduce errors. For example, AI chatbot interfaces can revolutionize the new customer onboarding process: “By analyzing patterns and trends in customer behavior, these chatbots can anticipate potential roadblocks or confusion points and offer preemptive solutions”1. Additionally, AI can free up human resources by automating decision-making in resource-intensive processes, such as loan origination or insurance underwriting. Faster processing times, in turn, improve customer satisfaction.
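As a simplified illustration of onboarding automation, the sketch below applies rule-based checks to a hypothetical application file. The field names and validation rules are assumptions for illustration; a production system might replace or augment such rules with trained models.

```python
# Illustrative sketch only: rule-based checks that might automate part of
# customer onboarding. Field names and validation rules are hypothetical.
from dataclasses import dataclass
from datetime import date

@dataclass
class Application:
    name: str
    date_of_birth: date
    id_number: str
    id_expiry: date

def validate(app: Application) -> list[str]:
    """Return a list of issues; an empty list means the file can proceed."""
    issues = []
    if not app.name.strip():
        issues.append("missing name")
    age_years = (date.today() - app.date_of_birth).days // 365
    if age_years < 18:
        issues.append("applicant under 18")
    if app.id_expiry < date.today():
        issues.append("identification expired")
    if len(app.id_number) != 9 or not app.id_number.isdigit():
        issues.append("malformed ID number")
    return issues

application = Application("Jane Doe", date(1990, 4, 2), "123456789", date(2027, 1, 1))
print(validate(application) or "ready for review")
```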
AI and machine learning (ML) technologies are increasingly being leveraged in risk management to satisfy compliance obligations and improve business, investment and credit decisions. Credit risk evaluation is a leading example: AI can improve an institution’s credit decision-making process by expanding the range of considered data beyond traditional statistical and historical data to a much wider set of data points, such as social media activity, financial transactions, behaviours and even mobile phone usage patterns. This development not only allows for a more accurate evaluation of a borrower’s creditworthiness but can also extend credit offerings to “unbanked” populations.
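The sketch below illustrates this concept with a toy logistic regression that blends traditional credit history with hypothetical “alternative” signals such as utility payments and mobile top-up patterns. The features, data and labels are invented for illustration and do not reflect any actual scoring model.

```python
# Illustrative sketch only: a toy credit model mixing traditional and
# "alternative" data points. Features, data and labels are hypothetical.
import numpy as np
from sklearn.linear_model import LogisticRegression

# Columns: [credit_history_years, debt_to_income,
#           on_time_utility_payment_rate, mobile_top_up_regularity]
X = np.array([
    [10, 0.20, 0.98, 0.90],
    [ 0, 0.35, 0.95, 0.85],   # thin-file ("unbanked") applicant
    [ 7, 0.60, 0.40, 0.30],
    [ 2, 0.25, 0.90, 0.80],
    [12, 0.55, 0.50, 0.40],
    [ 1, 0.30, 0.20, 0.10],
])
y = np.array([1, 1, 0, 1, 0, 0])  # 1 = repaid, 0 = defaulted (toy labels)

model = LogisticRegression().fit(X, y)

# A thin-file applicant with strong alternative-data signals can still
# receive a usable score, which is how such models may expand access
applicant = np.array([[0, 0.28, 0.97, 0.92]])
print(f"estimated repayment probability: {model.predict_proba(applicant)[0, 1]:.2f}")
```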
Alongside its many advantages, AI presents risks to consumers and financial institutions, including fraudsters leveraging AI tools, the dissemination and use of misinformation, and security breaches of sensitive banking data.
The “self-learning” capabilities of AI systems, which constantly improve their ability to fool computer-based detection systems, magnify both the nature and scope of potential fraud2. For example, AI tools can be used to create fictional videos, audio recordings and documents designed to replicate the likeness, voice or writing of an individual. Often referred to as “deepfakes”, these falsified clips and documents are sometimes used to target more vulnerable segments of the population, who are more likely to mistake them as authentic. This is occurring with increasing frequency: a recent report by identity verification platform Sumsub found that deepfake incidents in the fintech sector increased by 700% in 2023 over the previous year3.
“Deepfake incidents” not only increase the risk of fraud but can also lead to the dissemination of misleading information, posing a significant risk to the integrity of information and decision-making processes. The creation of synthetic data and insights through AI increases the potential dissemination of false or misleading information4. As noted above, one of the advantages of AI is its ability to leverage a wider scope of information to arrive at a more customized product offering. However, if the underlying data is incorrect, incomplete or imbued with biases that influence the AI’s output, the result can be inappropriate product offerings and, ultimately, bad decisions.
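One simple way such data problems can surface is in skewed outcomes across customer groups. The sketch below computes an approval-rate disparity ratio on toy data; the groups, outcomes and the 0.8 review threshold (a common rule of thumb, assumed here rather than drawn from any Canadian requirement) are illustrative only.

```python
# Illustrative sketch only: a basic check for skewed approval rates across
# groups, one way flawed or biased inputs can surface in an AI's output.
approvals = {
    "group_a": [1, 1, 0, 1, 1, 1, 0, 1],  # 1 = approved, 0 = declined
    "group_b": [0, 1, 0, 0, 1, 0, 0, 1],
}

rates = {group: sum(v) / len(v) for group, v in approvals.items()}
ratio = min(rates.values()) / max(rates.values())
print(rates, f"disparity ratio: {ratio:.2f}")

# Flag large disparities for human review before the model's
# recommendations reach customers (0.8 is an assumed threshold)
if ratio < 0.8:
    print("flag: approval-rate disparity warrants investigation")
```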
AI depends on an immense amount of data. The data used to train and operate AI tools in the financial sector includes personal information and other sensitive data (for more on data integrity, read “What should be included in my organization’s AI policy?: A data governance checklist”). This engages a number of privacy compliance considerations, such as whether sufficient consent has been obtained to use an individual’s personal information for the model’s given purpose, questions regarding the ability to retain or delete the data in question, and the extent of control that AI vendors have over the data their customers feed into their models. Moreover, AI systems present a suite of new cybersecurity vulnerabilities for threat actors to attempt to exploit, whether by obtaining data meant to be kept confidential or by compromising the system’s functionality (for more on cyberthreats, read “A sword and a shield: AI’s dual-natured role in cybersecurity”).
Existing Canadian law applies to AI just as it applies to other technologies, and several areas of law, such as human rights, privacy, tort and intellectual property, bear directly on AI. In addition, several new government initiatives focus specifically on AI.
The future of AI integration with financial services and products presents some tantalizing opportunities; however, because AI is built for automation and scale, even minor flaws can scale up quickly, posing serious risks to the financial sector and consumers. As noted in the federal government’s consultation paper, Proposals to Strengthen Canada’s Financial Sector5:
Financial sector adoption of AI could enhance the consumer experience by offering faster, more personalized financial services, while also increasing efficiency, improving risk management, and advancing the ability of financial institutions to detect and prevent fraud in real-time. However, AI could also introduce new or heightened risks to consumers and financial institutions, which must be mitigated to ensure the ongoing stability of the financial sector.
Over the coming months, we will publish a series of articles focused on the benefits of AI in the financial services space and the efforts of regulators and policymakers to harness the benefits of AI while mitigating potential risks.
This article was published as part of the Q4 2024 Torys Quarterly, “Machine capital: mapping AI risk”.
To discuss these issues, please contact the author(s).
This publication is a general discussion of certain legal and related developments and should not be relied upon as legal advice. If you require legal advice, we would be pleased to discuss the issues in this publication with you, in the context of your particular circumstances.
For permission to republish this or any other publication, contact Janelle Weed.
© 2024 by Torys LLP.
All rights reserved.