Can HR use AI to recruit, manage, and evaluate employees?

 
The short answer is yes: employers can use artificial intelligence (AI) throughout the employment lifecycle, and they are increasingly doing so. While AI has the potential to streamline operations and enhance efficiency, its use raises important legal considerations and obligations at every stage of that lifecycle. What are the key employment risks and regulatory developments associated with the use of AI, and what risk mitigation strategies may employers wish to consider?

Risks associated with AI in the workplace

The primary employment risks associated with the use of AI in the workplace can be separated into two broad categories: discrimination and privacy.

Discrimination

AI technology can give rise to a variety of discriminatory impacts, depending on how it is developed and deployed. Because their underlying algorithms may be trained on biased data, AI technologies, particularly generative AI, are susceptible to making decisions that unfairly disadvantage individuals or groups (including those protected under human rights or anti-discrimination laws) or to creating content that perpetuates biases or stereotypes. Every jurisdiction in Canada and the United States has legislation that prohibits discrimination in employment based on certain protected characteristics¹. Employers must be mindful of their obligations under these laws when implementing AI in the workplace, as they could face liability if their use of AI results in discrimination against current or prospective employees (whether or not the discrimination was intentional).

Privacy

In Canada, the collection, use and disclosure of personal information by private businesses is governed by the federal Personal Information Protection and Electronic Documents Act (PIPEDA) and by substantially similar provincial legislation in Québec, Alberta and British Columbia. Because AI systems are developed by consuming large amounts of data, which may include personal information, it is important for employers to be aware of their obligations under privacy laws and to ensure that their use of AI does not violate their employees’ privacy rights (for more on these considerations, read “What should be included in my organization’s AI policy?: A data governance checklist”).

The current regulatory landscape for AI use in the workplace

Although existing human rights and privacy legislation may address some of the issues associated with AI, there have been increasing efforts to regulate its use, including in the workplace, in both Canada and the United States.

Canada

At the federal level, Bill C-27, which proposes new legislation relating to consumer privacy, data protection, and the governance of AI systems in Canada, was introduced in 2022. If passed, the legislation would set an overarching goal of protecting individuals against a range of serious risks associated with the use of AI, including risks of physical or psychological harm and of biased output causing adverse impacts on individuals. That goal would then be supported by clear and tangible requirements prescribed in future regulations.

On a provincial level, Ontario recently passed legislation that requires employers to disclose in publicly advertised job postings whether AI is being used in the hiring process to screen, assess or select candidates (although the legislation has passed, a date has not yet been set for this requirement to come into force). In Québec, recent privacy reforms, known as Law 25², impose transparency requirements on systems that are used to make entirely automated decisions about an individual, as well as consent requirements for technologies used to profile individuals.

United States

In 2023, President Biden issued an executive order setting out guiding principles for the development and use of AI in federal agencies, including in the context of employment. Though the executive order does not directly apply to private industry except in certain limited circumstances, it is expected to influence private sector employment policies, including through the incorporation of new standards and requirements into federal contracts. The Department of Labor has also issued technical guidance aimed at preventing disparate impacts on protected groups where private employers use AI to make hiring, retention and promotion decisions.

Further, a number of state and local legislatures have proposed laws regulating the use of AI in private sector employment, though few have been passed into law. Notable exceptions include Maryland, Illinois, and New York City, which have passed legislation aimed at curbing discrimination resulting from, and increasing transparency regarding, the use of AI in the recruitment and hiring process.

Strategies to mitigate risk

To mitigate the human rights and privacy risks described above, it will be critical for employers both to ensure compliance with existing legislation and regulation, and to stay informed about, and compliant with, new and developing legal obligations to employees as they relate to the use of AI.

Employers may also wish to consider the following measures:

  • Take steps to ensure they fully understand how their AI tools work (including understanding the underlying algorithms) prior to implementation.
  • Have a “human in the loop” (i.e., ensure that a human user can intervene and change the outcome of an automated event or process if necessary).
  • Conduct bias audits/assessments to ensure tools are not generating unintended discriminatory effects (a simple illustration of one common audit metric appears after this list).
  • Impose appropriate contractual obligations (and indemnities) on AI vendors.
  • Disclose how AI is being used in the workplace or for employment purposes and how employee personal information is being used, obtaining consent where required.
  • Provide employees with clear information about how to object to the collection, use, or disclosure of their personal information, how to challenge decisions made about them, and how to exercise access rights.
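
By way of illustration, one metric commonly used in bias audits of hiring tools is the adverse impact (or selection rate) ratio: the selection rate for each demographic group is compared to the rate for the most-selected group, and ratios falling below roughly 0.8 (the “four-fifths” rule of thumb long used by U.S. regulators) are flagged for closer review. The sketch below is a minimal, hypothetical illustration of that calculation only; the data, group labels and threshold are assumptions for the example, and it is not a substitute for a formal bias audit (which, in some jurisdictions, must be performed by an independent auditor).

```python
# Minimal, illustrative sketch of an adverse-impact (selection rate) check.
# Hypothetical data and labels; not a substitute for a formal, independent bias audit.
from collections import defaultdict

def selection_rates(outcomes):
    """outcomes: iterable of (group_label, was_selected) tuples."""
    selected, total = defaultdict(int), defaultdict(int)
    for group, was_selected in outcomes:
        total[group] += 1
        if was_selected:
            selected[group] += 1
    return {group: selected[group] / total[group] for group in total}

def impact_ratios(rates, threshold=0.8):
    """Compare each group's selection rate to the highest-rate group.

    Ratios below `threshold` (the four-fifths rule of thumb) are flagged.
    Assumes at least one candidate was selected overall.
    """
    benchmark = max(rates.values())
    return {
        group: {"rate": rate, "ratio": rate / benchmark, "flagged": rate / benchmark < threshold}
        for group, rate in rates.items()
    }

if __name__ == "__main__":
    # Hypothetical screening outcomes from an AI resume-screening tool.
    outcomes = ([("group_a", True)] * 40 + [("group_a", False)] * 60
                + [("group_b", True)] * 25 + [("group_b", False)] * 75)
    for group, stats in impact_ratios(selection_rates(outcomes)).items():
        print(group, stats)
```

A flagged ratio is a signal to investigate, not a legal conclusion: whether a disparity amounts to unlawful discrimination depends on the applicable human rights or anti-discrimination framework and should be assessed with counsel.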

For more on responsibly integrating AI into your business practice, read “The board says we need an AI strategy. How do we start?”

We will continue our conversation on this topic this fall during a three-part webinar series, followed by the release of our comprehensive guide titled “AI for Employers”. We encourage you to join us over the coming months to learn more about the legal implications of using AI in the workplace.


  1. In Ontario, for example, employees may not be discriminated against on the basis of race, ancestry, place of origin, colour, ethnic origin, citizenship, creed, sex, sexual orientation, gender identity, gender expression, age, record of offences, marital status, family status or disability.
  2. Law 25, also known as the Act to modernize legislative provisions as regards the protection of personal information or simply Bill 64.

This article was published as part of the Q4 2024 Torys Quarterly, “Machine capital: mapping AI risk”.

To discuss these issues, please contact the author(s).

This publication is a general discussion of certain legal and related developments and should not be relied upon as legal advice. If you require legal advice, we would be pleased to discuss the issues in this publication with you, in the context of your particular circumstances.

For permission to republish this or any other publication, contact Janelle Weed.

© 2024 by Torys LLP.

All rights reserved.
 
