Are AI transactions different from traditional tech deals?

 
AI has dominated the conversation across markets as transformer-based products such as ChatGPT provide mass consumer access to generative AI and capture investors' attention. With no end to the acceleration and adoption of this technology in sight, the markets are responding; however, because of the greater nuances and risks this technology presents, those dealing in AI M&A face additional considerations compared to traditional emerging technology transactions. How are the markets responding to this growing demand for AI technologies?

The current state of investment in AI

Globally, enterprises are integrating value-driven AI either through licensing agreements or by buying the technology outright, with products for marketing and sales, product-and-service development, and IT-related functions at the top of many companies' wish lists[1]. Companies in sectors not traditionally seen as technology-driven, such as mining and oil and gas supply chain participants, are also looking at ways of implementing AI solutions in their operations and offerings.

Global deal flow is concentrated on horizontal platforms that build, deploy, or fine-tune AI models[2]. The U.S. and Canada are leading the way in private equity and venture capital dollars invested in AI and machine learning companies, with $5.05 billion invested across 147 transactions from the beginning of 2023 to May 2024[3].

Global venture capital investment in AI is forecast to hit US$12 billion by the end of 2024[4], and AI's contribution to global GDP is expected to reach US$500 billion over the next decade[5]. According to Statista, the market size in Canada is expected to grow at a compound annual growth rate (CAGR) of 27.67% between 2024 and 2030, resulting in a market volume of C$24.08 billion by 2030[6].
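As a rough illustration of the compound-growth arithmetic behind that Statista projection, the sketch below backs out the implied 2024 market volume from the stated 2030 figure and CAGR; the 2024 base is not given in the cited source and is derived here purely for illustration.

    # Illustrative only: back out the implied 2024 Canadian AI market volume
    # from the projected 2030 volume and the stated 27.67% CAGR.
    # The 2024 base is not stated in the cited source.
    cagr = 0.2767        # compound annual growth rate, 2024-2030
    volume_2030 = 24.08  # projected market volume in 2030, C$ billions
    years = 6            # compounding periods from 2024 to 2030

    implied_2024 = volume_2030 / (1 + cagr) ** years
    print(f"Implied 2024 market volume: about C${implied_2024:.2f} billion")
    # Output: Implied 2024 market volume: about C$5.56 billion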

An increasing number of investors are relying on AI to source data-driven investment opportunities, build financial models, and manage assets[7]. In a recent global survey of investment managers, over half of participants (54%) reported that they are currently using AI within their investment strategy or asset-class research, with a further 37% planning to adopt the technology[8].

In response to this growing use by federally regulated financial institutions (FRFIs) and federally regulated pension plans (FRPPs), the Office of the Superintendent of Financial Institutions (OSFI) is set to release an updated guideline on the governance and risk management framework for data modelling. The guideline, which may require significant changes to policies and procedures, comes into effect on July 1, 2025. It remains to be seen what implications the updated guideline and other forthcoming regulatory changes will have for banks and other financial institutions. The financial services sector is not alone, however, in responding to emerging AI regulation that may well affect dealmaking and other activities in the future (for more on AI regulatory efforts, please read "What's new with artificial intelligence regulation in Canada and abroad?").

AI deal practices and considerations

An enhanced due diligence approach 

AI adoption is occurring through implementation (whether via licensing or direct development) and acquisition by corporates and financial investors.

When integrating AI into enterprise operations, it is vital to have complete transparency on how proprietary knowledge and data will be used to train models, and on whether the application is built on a third-party foundation model. How information is inputted, stored, and computed, and who owns the base technology, will affect output, ownership, and associated liability. Organizations need to understand what they are buying and what can happen if things go wrong, and to have a clear framework of accountability if they do.

Buyers and investors should consult knowledgeable experts to help them identify the real value and risk within a target product or company. This is often more complex in AI deals (particularly those involving generative AI) because of the multi-faceted and layered nature of the technology, the pace at which it is evolving, and the limited pool of genuine experts. As a result, the ability to identify risks, and to assess whether an organization is capable of taking on those risks, becomes more critical than in slower-moving areas.

Legal liabilities

When acquiring AI companies or technology, investors need to be prepared to accept legal risks they would not otherwise have to consider. For example, many of the "market" AI reps carve out the use of third-party data and call for compliance with applicable standards and laws. Sellers of AI products will often schedule instances of third-party data use against such reps and, given the sector's rapid evolution, push back on the market-standards compliance language in those reps.

Investors should assess model governance, R&D activities, proprietary technology, and the company's ability to manage and mitigate risk, in addition to identifying where responsibility lies for the quality of predictions, the incentives for acting in good faith, and liability across each layer of the company's technology stack.

AI has also become a common tool within the due diligence process itself; however, while it offers greater efficiency, organizations must keep its limitations in mind and have a team prepared to assess the information the AI has pulled[9].

Betting on unknown outputs? Bring in your tech experts

As with all technology transactions, buyers must understand how a proposed AI model works, which use cases hold real value and which have limited growth potential. Companies and investors in the AI buying cycle, however, must understand the technology as it currently stands while also assessing potential outputs that have yet to be created. The technology must be examined by those with deep technological know-how so that what is being bought, and how it will integrate with current and planned operations, can be identified and mapped out.

While the interfaces of some AI applications may look similar, each model behaves differently and sits on a different underlying stack. Different training approaches produce different learned behaviours, which can limit interchangeability. Investors and acquirers must review each model on an individual basis, without assuming interoperability.

Depth of the data moat

Whether integrating an AI-enhanced application into operations, building an in-house model, or investing in an AI company, a core priority is ensuring effective learning while keeping the underlying data safe. Because data drives value, protecting that data, and any models built on it, is key to maintaining a strategic advantage. Where existing internal data is leveraged, security protocols, privacy policies, and contracts must be put in place for all parties involved. This includes adopting carefully considered, well-structured IP and cybersecurity strategies and carrying out regular audits of the protocols and procedures that keep proprietary data safe.

Whose IP is it anyway?

When implementing a model built on external, often copyrighted, data, questions must be asked about the IP rights in what is being bought, how the model is designed to process inputs, and whether the seller has the right to sell. Both AI acquirers and investors must map out who owns the model's inputs and outputs, evaluate its scalability, and determine what consent, if any, has been obtained from third-party IP owners.

Current Canadian legislation offers little guidance, and no consensus has emerged, on when work created by AI could be considered copyrightable, and who would own it (for more on questions about the intersection of AI and IP, read "Does AI have patent and copyright ownership?"). This makes it even more critical to identify the owner of the source information and ensure no IP violations occur.

Trusting and verifying

Amid a noisy market, investors and corporate buyers are becoming increasingly discerning, placing greater weight on sustainable growth, trustworthiness, veracity, safety, and the risk of obsolescence.

This has opened the door to companies that help test the resiliency of AI products and has encouraged lawmakers to introduce legislation governing AI's design, development, and use. Testing is also an important factor in risk mitigation, as a model must be tested before it can be insured. Buyers of AI are increasingly looking to a new breed of AI-focused insurers for expertise and assessments, with global AI insurance premiums projected to reach US$4.73 billion by 2032[10].

As the race for AI opportunities continues, dealmaking practices are sure to adapt.


  1. McKinsey, “The state of AI in early 2024: Gen AI adoption spikes and starts to generate value”, May 30, 2024.
  2. PitchBook, 2024.
  3. “Artificial Intelligence, Canada”, Statista Market Insights.
  4. Chris Gillam and Judy Wade, “How investors are navigating the AI era”, CPP Investments, March 14, 2024.
  5. Jennifer Brown, “Due diligence gets a makeover”, Canadian Lawyer, April 30, 2019. This article features Managing Partner of Torys' Legal Services Centre, Chris Fowles, in a discussion on innovation in M&A.
  6. Sandee Suhrada, Kate Schmidt and Disank Jain, “Providing insurance coverage for artificial intelligence may be a blue ocean opportunity”, Deloitte, May 29, 2024.

This article was published as part of the Q4 2024 Torys Quarterly, “Machine capital: mapping AI risk”.

To discuss these issues, please contact the author(s).

This publication is a general discussion of certain legal and related developments and should not be relied upon as legal advice. If you require legal advice, we would be pleased to discuss the issues in this publication with you, in the context of your particular circumstances.

For permission to republish this or any other publication, contact Janelle Weed.

© 2024 by Torys LLP.

All rights reserved.
 
