The short answer is yes: you can use artificial intelligence throughout the employment lifecycle. Employers are increasingly leveraging artificial intelligence technology in the workplace. While AI has the potential to streamline operations and enhance efficiency, its use raises important legal considerations and obligations at every stage of the employment lifecycle. What are the key employment risks and regulatory developments associated with the use of AI, and what risk mitigation strategies may employers wish to consider?
The primary employment risks associated with the use of AI in the workplace can be separated into two broad categories: discrimination and privacy.
AI technology can give rise to a variety of discriminatory impacts, depending on how it is developed and deployed. AI technologies, particularly generative AI, are susceptible to making decisions that unfairly disadvantage individuals or groups (including those protected under human rights or anti-discrimination laws) or to generating content that perpetuates biases or stereotypes, because the underlying algorithms may be trained on biased data. Every jurisdiction in Canada and the United States has legislation that prohibits discrimination in employment based on certain protected characteristics. Employers must be mindful of their obligations under these laws when implementing AI in the workplace, as they could face liability if their use of AI results in discrimination against current or prospective employees, whether or not the discrimination was intentional.
The collection, use and disclosure of personal information in Canada by private businesses is protected by the federal Personal Information Protection and Electronic Documents Act (PIPEDA) and substantially similar provincial legislation in Québec, Alberta and British Columbia. As AI systems are developed by consuming large amounts of data, which may include the collection of personal information, it is important for employers to be aware of their obligations under privacy laws and to ensure that their use of AI does not violate their employees’ privacy rights (for more on employment considerations, read “What should be included in my organization’s AI policy?: A data governance checklist”).
Although existing human rights and privacy legislation may address some of the issues associated with AI, there have been increasing efforts to regulate its use, including in the workplace, in both Canada and the United States.
At the federal level, Bill C-27, which proposes new legislation relating to consumer privacy, data protection, and the governance of AI systems in Canada, was introduced in 2022. If passed, the legislation would establish an overarching goal of protecting individuals against a range of serious risks associated with the use of AI, including physical or psychological harm and biased output that adversely affects individuals. This goal would then be supported by clear and tangible requirements prescribed in future regulations.
At the provincial level, Ontario recently passed legislation that requires employers to disclose in publicly advertised job postings whether AI is being used in the hiring process to screen, assess or select candidates (while passed, a date has not yet been set for the requirement to come into force). In Québec, recent privacy reforms, known as Law 25, impose transparency requirements on systems that are used to make entirely automated decisions about an individual, and consent requirements for technologies used to profile individuals.
In 2023, President Biden issued an executive order that provided guiding principles for the development and use of AI in federal agencies, including in the context of employment. Though the executive order does not directly apply to private industry except in certain limited circumstances, it is expected to influence private sector employment policies, including through the incorporation of new standards and requirements in federal contracts. The Department of Labor has also issued technical guidance for private employers that use AI to make hiring, retention and promotion decisions, in order to prevent disparate impacts on protected groups.
Further, a number of state and local legislatures have proposed laws regulating the use of AI in private sector employment, though few have been passed into law. Notable exceptions include Maryland, Illinois, and New York City, which have passed legislation aimed at curbing discrimination resulting from, and increasing transparency regarding, the use of AI in the recruitment and hiring process.
To mitigate the human rights and privacy risks described above, it will be critical for employers both to ensure compliance with existing legislation and regulation, and to stay informed about, and compliant with, new and developing legal obligations to employees relating to the use of AI.
Employers may also wish to consider the following measures:
For more on responsibly integrating AI into your business practice, read The board says we need an AI strategy. How do we start?
We will continue our conversation on this topic this fall during a three-part webinar series, followed by the release of our comprehensive guide titled “AI for Employers”. We encourage you to join us over the coming months to learn more about the legal implications of using AI in the workplace.
This article was published as part of the Q4 2024 Torys Quarterly, “Machine capital: mapping AI risk”.
This publication is a general discussion of certain legal and related developments and should not be relied upon as legal advice. If you require legal advice, we would be pleased to discuss the issues in this publication with you, in the context of your particular circumstances.
© 2024 by Torys LLP. All rights reserved.