What’s new with artificial intelligence regulation in Canada and abroad?

Since we published our comprehensive guide to artificial intelligence regulation in Canada in April 2023, the rapid rise of AI has prompted countries to revise their approaches to safeguarding the deployment of AI systems. In this article, we examine differing approaches to AI regulation across Canada, the United States and Europe, and ask: as industries strive to keep pace with constantly evolving AI technology, how are jurisdictions in Canada and abroad shifting their regulatory policies?

AI regulation in Canada: what is the current and future (and in-between) state?

The AI and Data Act remains in Parliament

Canada’s first comprehensive AI legislation, the AI and Data Act (AIDA), is still making its way through the House of Commons as part of Bill C-27 (the Bill). It is currently being considered by the Standing Committee on Industry and Technology after its second reading. In November 2023, the Minister of Innovation, Science and Industry wrote to the Committee proposing a number of substantial amendments in response to stakeholder feedback. These amendments are likely to be considered by the Committee as it continues its clause-by-clause review; however, given the Committee’s progress to date, the Bill is unlikely to receive Royal Assent before the end of this year.

New sector-specific laws and guidance

Beyond new laws aimed specifically at regulating AI, several existing areas of law are fast becoming key features of the AI regulatory landscape in Canada, notably employment, privacy, and human rights.

In March 2024, Ontario became the first province to pass a law requiring employers to disclose the use of AI in the hiring process—specifically, in the screening, assessment, or selection of applicants for a position (while the law has been passed, a date has not yet been set for the requirement to come into force). Though Ontario is the first Canadian jurisdiction to impose such a requirement, it follows in the footsteps of New York City’s Local Law 144, which requires similar transparency measures in the hiring process and which became enforceable in July 2023 (for more on the intersection of AI and employment law, read “Can HR use AI to recruit, manage and evaluate employees?”).

On the privacy side, federal and provincial regulators in Canada came together in December 2023 to issue joint guidance for public and private sector entities on how to comply with existing privacy laws (including PIPEDA) while developing, providing, or using generative AI in their operations. The guidance emphasizes the protection of vulnerable groups, confirms that the privacy law principles of consent and transparency remain paramount in the Gen AI context, and outlines concrete practices businesses can use to document compliance with privacy laws when making use of generative AI. Certain uses of Gen AI were deemed “no-go zones” under existing privacy laws, such as creating AI-generated content for malicious or defamatory purposes. While the guidance is non-binding, it interprets existing, binding privacy laws and provides critical insight into how privacy regulators will adjudicate Gen AI-related privacy complaints.

Other lawmakers have also recently highlighted the malicious use of Gen AI as a key concern, particularly the creation of “deepfake” intimate images without consent. For example, recently proposed federal legislation aimed at promoting online safety (Bill C-63) currently includes deepfake images in its definition of “intimate content communicated without consent”—a form of harmful online content that Bill C-63 seeks to address by placing content moderation obligations on social media services. For more on developing responsible AI practices, read “What should be included in my organization’s AI policy?: A data governance checklist”.

Looking ahead in Québec

Québec’s recent privacy law reforms have resulted in the regulation of some technologies that use artificial intelligence. In particular, decisions based exclusively on automated processing of personal information must be disclosed to the individuals concerned. Businesses must also disclose their use of technological tools offered to the public that have functions allowing individuals to be identified, located or profiled and, in some circumstances, those functions must be activated by the user rather than enabled by default.

Despite these advances, there have also been calls for a regulatory framework to specifically govern the use of artificial intelligence. In January 2024, the Québec Innovation Council (the Council) submitted a comprehensive report to the Ministry of Economy, Innovation and Energy, outlining steps and recommendations for such a framework. Based on consultations with hundreds of experts, the Council recommends a review of existing laws, particularly in the field of employment, to accommodate the rapid evolution of artificial intelligence, as well as the creation of an interim AI governance steering committee to work on regulating AI and integrating it into Québec society.

Recommendations for businesses

The overall posture in Canada remains forward-looking. While a patchwork of AI-specific requirements is taking shape, there is much that remains to be passed, confirmed, worked out, and implemented.

Even with omnibus AI legislation still before Parliament, Canadian businesses should be mindful of existing legal risks that can arise from the use of AI (including those related to privacy, intellectual property, consumer protection, and employment) and how to manage them. Many businesses have developed, or are developing, an AI governance framework to document internal AI policies, requirements, and accountabilities. An AI governance framework that aligns with best practices will help manage existing legal risks and ease the compliance burden as new requirements come into force.

Businesses that are incorporating AI systems and tools into their operations should stay on top of these developments even before they become legally binding; otherwise, they risk having to adopt costly and disruptive compliance measures once enforcement begins and their AI practices are already entrenched.

How is AI being regulated beyond Canada?

Around the world, a patchwork of regulations, treaties, and guidelines is emerging in response to the increasingly widespread adoption of AI. Below, we outline key measures in Europe, the United States, and international law designed to promote the responsible development, distribution, and use of AI.

Europe: AIA and related directives

EU’s Artificial Intelligence Act

The main legislative response to AI in the European Union is the Artificial Intelligence Act (the AIA), which entered into force in August 2024 and whose obligations will apply incrementally over the next two years. The AIA imposes obligations pertaining to risk management, data governance, documentation and record-keeping, human oversight, cybersecurity, transparency, and quality control, among others.

The AIA applies to providers, deployers, importers, distributors, and other actors responsible for AI systems within the EU, and to providers and deployers located outside the EU whose AI systems’ outputs are used within the EU. As with the EU’s data privacy regulations, this means the AIA can apply to Canadian businesses with operations or customers in the EU.

Notably, the AIA prohibits the use of AI systems that pose an “unacceptable risk”, including—for example—social scoring, real-time biometric identification and categorization (with some exceptions), and systems that cause harm by manipulating behaviour or exploiting vulnerable groups. Much as Canada’s AIDA focuses on “high-impact systems”, the AIA concentrates its obligations on systems that pose a “high risk”, alongside its regulation of certain aspects of general-purpose AI systems. High-risk systems include those used in the management and operation of critical infrastructure, educational and vocational training, granting access to essential services and benefits, law enforcement, border control, and the administration of justice.

Other EU directives and frameworks

A proposed complement to the AIA, the AI Liability Directive would—if passed—provide a non-contractual civil liability framework for persons harmed by AI systems.

In May 2024, the Council of Europe adopted the Framework Convention on Artificial Intelligence (the Convention), the first international treaty regulating the use of AI. The Convention sets out a framework that would apply to public authorities and private actors whose AI systems have the “potential to interfere with human rights, democracy and the rule of law.” It also contains provisions requiring AI providers to disclose the use of AI, conduct risk and impact assessments, and provide complaint mechanisms. The Convention opened for signature on September 5, 2024.

In Spring 2024, the Council of the European Union and the European Parliament reached a provisional agreement on the Platform Work Directive (the Directive), which introduces rules regulating the working conditions of platform—or “gig economy”—workers. The Directive ensures that platform workers cannot be dismissed solely on the basis of automated decision-making processes, and that some degree of human oversight is required for decisions directly affecting persons performing platform work.

United States: the Algorithmic Accountability Act of 2023 and state legislation

Federal level regulation

There is currently no comprehensive federal legislation in the U.S., proposed or enacted, that would regulate AI in the manner of the AIDA in Canada or the AIA in the EU. However, legislation governing the use of automated decision-making in similar “high-impact” contexts was introduced in the Senate in 2023.

That bill, the Algorithmic Accountability Act of 2023, would regulate the use of automated decision-making processes in “high-impact” scenarios such as housing, finance, employment, and education. The draft proposes that the Federal Trade Commission (FTC) establish a Bureau of Technology to oversee the Act’s enforcement and implementation.

Other U.S. federal legal instruments aimed at regulating AI include the following:

  • National Institute of Standards and Technology (NIST): NIST, an agency of the United States Department of Commerce, published a voluntary set of guidelines in 2023 titled the AI Risk Management Framework, with the goal of managing AI-related risk and increasing trustworthiness in the design, development, and use of AI systems.
  • The White House: Signed by President Biden in October 2023, Executive Order 14110, Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence, sets out a principles-based approach to governing the development and use of AI by U.S. government executive departments and agencies. The Order focuses on: (i) safety and security, (ii) responsible research and development, (iii) equity and anti-discrimination, and (iv) safeguarding privacy and civil liberties. Among other priorities, the Order mandates the development of federal standards in accordance with its guiding principles.

State level regulation

Several U.S. states, including Colorado, Utah, Illinois, Massachusetts, Ohio, and California, have proposed or enacted legislation to regulate the development, provision, or use of AI in the private sector. These state-level laws tend to focus on specific categories of AI systems, such as higher-impact systems and generative AI systems.

California’s SB 1047, which has been passed by the California Senate and the California State Assembly but has not yet been signed into law, has garnered considerable attention given the state’s status as a stronghold of the tech industry. The law focuses on the safe development of AI models and would apply to AI developers that offer services in California regardless of whether the developer is headquartered there. SB 1047 aims to regulate only large and powerful AI models: unlike the Canadian and European frameworks, whether a system is covered by the law depends on its computing power rather than on qualitative factors about its use or purpose. Developers would be subject to various testing and safety requirements for covered systems, as well as compliance enforcement measures, including civil penalties of up to 10% of the cost of the computing power used to train the model.

International instruments

Several public international legal bodies have issued instruments or guidance in this area. Though generally non-binding, these instruments are nevertheless instructive regarding policy priorities at the highest levels of international law, for both private and public entities.

  • United Nations: In March 2024, the UN General Assembly adopted a non-binding resolution, Seizing the opportunities of safe, secure and trustworthy artificial intelligence systems for sustainable development, which endorses an approach to AI regulation that prioritizes safety, respect for human rights and fundamental freedoms, and global inclusivity.
  • Organisation for Economic Co-operation and Development (OECD): Adopted in 2019 and amended in May 2024, the OECD’s Recommendation of the Council on Artificial Intelligence provides non-binding guidance for member states and AI stakeholders with respect to the cross-sectoral regulation of AI, including principles for responsible stewardship of AI and recommendations for AI governance.
  • The G7: The G7’s non-binding Hiroshima AI Process Comprehensive Policy Framework, launched in 2023, rests on the following pillars: promoting safe, secure, and trustworthy AI; setting out actions for organizations to follow when designing, deploying, and using AI; analyzing priority risks, challenges, and opportunities of generative AI; and promoting project-based cooperation on the development of AI tools and practices.

A note on class actions in the era of AI

As the prevalence of AI expands, so does its intersection with class action litigation, particularly in the United States, where legal trends often predict the trajectory of the Canadian class action field. AI’s integration into various industries has spurred new AI-related class actions in areas like data protection, antitrust and competition law, copyright, and securities.

In the realm of privacy law and data protection, AI systems are increasingly scrutinized for potential violations, especially as they process vast amounts of personal data. Class actions targeting companies for data breaches or improper data handling practices are on the rise, often hinging on how AI algorithms collect and process that data. Similarly, in antitrust cases, AI’s role in price-fixing, market manipulation, and collusion presents new grounds for class actions.

The copyright domain also faces disruption, with AI-generated content blurring the lines of ownership and infringement. Class actions in this area often intersect with questions of authorship and the misuse of copyrighted materials by AI. Furthermore, AI’s impact on financial markets has given rise to securities class actions in which plaintiffs argue that AI-driven trading or analysis has led to financial losses or constitutes securities fraud.

As Canada often looks to U.S. litigation as a predictor, these developments signal a future in which AI and class actions will increasingly intersect, shaping the legal landscape on both sides of the border. We will continue to monitor new class actions as they are filed, with some already initiated in Québec.

Recommendations for businesses

Even where organizations are not directly subject to the international requirements and instruments above, they are likely to be affected indirectly as these instruments shape pending and new legislation in Canada, the contractual requirements of international customers, vendors, and partners, and industry best practices. We recommend that businesses operating in Europe in particular take note of these developments and work to ensure that their internal AI governance frameworks align with anticipated laws and regulations.


This article was published as part of the Q4 2024 Torys Quarterly, “Machine capital: mapping AI risk”.

To discuss these issues, please contact the author(s).

This publication is a general discussion of certain legal and related developments and should not be relied upon as legal advice. If you require legal advice, we would be pleased to discuss the issues in this publication with you, in the context of your particular circumstances.

For permission to republish this or any other publication, contact Janelle Weed.

© 2024 by Torys LLP.

All rights reserved.
 
