Consumer AI services like ChatGPT, Bing and Bard are chatbot-style tools that use AI-based language models to answer questions. In recent months, these tools have exploded in popularity: they are typically free, easy to access through a browser and user-friendly, and in many cases quite impressive in what they can produce in response to question prompts.
The popularity of these tools, together with their early stage of development and nascent regulatory scrutiny, has raised a number of concerns about how employees may use them in the workplace. The concerns range from confidentiality and privacy to accuracy, workplace harassment and work product ownership.
We set out below key considerations for employers in creating guidance on the use of these services in the workplace.
Although even Members of Parliament are now using ChatGPT to write their questions for House of Commons speeches (and commenting on its accuracy, or inaccuracy, in doing so)1, regulators in Canada and abroad are raising concerns about what happens to the data that these tools collect. The Italian privacy regulator issued a ban on ChatGPT in March 2023 pending an investigation into its compliance with EU privacy law, and the Canadian Office of the Privacy Commissioner opened an investigation into the same tool in April following a consumer complaint. The Federal Trade Commission is reviewing a similar complaint in the U.S.
In addition, there are broader concerns about the impact on companies when employees use these tools to assist in their job duties. Examples of these impacts include:
Businesses in Canada and the U.S. should consider creating policies, guidance or training tools on the appropriate use of consumer AI tools in the workplace to mitigate the above risks. Some organizations may choose to prohibit the use of these tools for business purposes unless they are integrated into software that has been vetted and licensed by the company. Others may focus on educating employees about the risks and identifying potentially appropriate uses. Such guidance should consider the following elements:
Employee guidance should explain why the use of consumer AI may create risks for the company and its staff. For example, employees may not understand that the information they input as prompts for these tools may be retained indefinitely in the model, used for various undefined purposes outside the control of the company, and easily associated back to the company, its customers or its employees. They may not make the connection between the use of these services and their confidentiality obligations as employees.
Similarly, employees may not appreciate that the seemingly sophisticated answers provided by these tools may be inaccurate. The difference between these tools and company-vetted research products should be explained so that employees understand the risks of using them to assist in their work.
Employers should reiterate the application of their existing privacy, confidentiality, acceptable IT use and workplace conduct policies and explain how they may limit or prohibit the use of consumer AI tools, and how the disciplinary consequences discussed in those policies may apply to unauthorized use of these services.
Employers should consider whether awareness and policies are sufficient to ensure compliance with their position on consumer AI, or whether monitoring, audits, attestations of compliance or restrictions on access to the tools’ websites are needed.
As mentioned above, employers should be mindful of how advances in consumer AI may change the nature of common workplace complaints. Companies should consider whether their workplace conduct training and investigation processes should be updated to address these technological developments.
Given the rapid development of both the technology and the regulatory response to consumer AI, employers should be clear in their internal communications that workplace guidance may change. Companies should also designate a person responsible for updating such policies and controls as the risk landscape evolves, such as in the event a regulator prohibits the use of such a tool within the country.
While many businesses will see advantages to some employee use of consumer AI services, the scope of any permitted use should be clearly explained, and organizations should be prepared to adjust their practices as the technology and regulatory landscape develop.
To discuss these issues, please contact the author(s).
This publication is a general discussion of certain legal and related developments and should not be relied upon as legal advice. If you require legal advice, we would be pleased to discuss the issues in this publication with you, in the context of your particular circumstances.
For permission to republish this or any other publication, contact Janelle Weed.
© 2024 by Torys LLP.
All rights reserved.