AI will not be judging you: Canadian Judicial Council issues guidance on the use of AI in Canadian courts

As the legal profession grapples with the ethical and practical issues raised by the adoption of generative AI, lawyers have looked to the courts for guidance. Although individual courts have issued a small number of practice directions regarding the use of AI in judicial proceedings, the Canadian Judicial Council's (CJC's) Guidelines for the Use of Artificial Intelligence in Canadian Courts provide a clear statement of the judiciary's approach to the use of AI—stating, crucially, that AI cannot supplant judges' exclusive responsibility for decision-making.

What you need to know

  • The CJC has issued guidelines for the use of AI in Canadian courts. These guidelines are intended to raise awareness of the risks associated with AI tools while providing a principled framework for their use.
  • The guidelines promote key values including judicial independence, the ethical use of AI, safety, transparency, ongoing monitoring and continuing education.

Overview

The CJC is the body of federally appointed Chief Justices and Associate Chief Justices of Canadian superior courts, which oversees judicial conduct and develops policies for the professional development of judges. The CJC periodically provides guidance to Canadian courts on ethical rules for judges in their use of technology.

Judicial decision-making should not be delegated to AI

Judges are responsible for making crucial decisions that shape nearly every aspect of Canadian society, and they cannot delegate that role to anyone—not clerks, not assistants, and certainly not AI. However, judges can leverage various supports to assist in their judicial responsibilities. For example, judges:

  • consult law clerks;
  • rely on administrative support for proofreading and document formatting;
  • use technology that assists with grammar and spell-check; and
  • employ speech tools for dictation.

The CJC's guidelines are intended to help Canadian courts navigate the line between the permissible background use of AI in court processes and the impermissible delegation of judicial decision-making to AI.

Seven guiding principles for the use of AI in Canadian courts

The CJC’s guiding principles can be summarized as follows:

  1. Protect judicial independence. Any use of AI in Canadian courts must uphold the fundamental principle of judicial independence. The guidelines highlight the risk that commercially designed AI systems could exert influence on decision-makers, as their algorithms may prioritize efficiency over the interests of justice.
  2. Use AI consistently with core values and ethical rules. AI use by judges may be new, but the existing rules still apply: any use of tools or assistants (virtual or otherwise) must uphold the core values of the court and judicial ethics. These values include independence, integrity and respect, diligence and competence, equality and impartiality, fairness, transparency, accessibility, timeliness and certainty.
  3. Have regard to the legal aspects of AI use. Any integration of AI into court processes must be done with consideration for the law. At all stages of AI use, courts must consider the implications it may have on intellectual property and privacy rights—including in relation to the data used to train the model, any confidential information used in prompts, and AI outputs.
  4. AI tools must be subject to stringent information security standards and output safeguards. Courts should be attuned to the risks associated with AI use, including the exposure of sensitive training data drawn from sealed court files, tampering, and the leaking of personal information. Courts should have robust information and cybersecurity programs in place and give special consideration to addressing AI-specific threats.
  5. AI decision-making should be explainable. Judicial accountability requires that AI tools provide understandable explanations for their output. This is necessary to protect public confidence in the administration of justice and to ensure fairness in appeals.
  6. Courts must regularly track the impact of AI deployments. Court administrators should perform comprehensive assessments of AI's potential impact on judicial independence, workload, backlog reduction, privacy, security, access to justice, and the court's reputation. The guidelines recommend establishing a pilot project or controlled testing environment before undertaking a full-scale deployment of an AI tool. Following this initial period, courts should periodically assess the efficacy of their AI tools as they evolve.
  7. Courts must develop a program of education and provide user support. Courts must provide comprehensive education, ensure continuous training, and provide technical support to judges using AI.

Conclusion

The legal profession will continue to watch in anticipation as judges assess how AI will be used in court processes. But these guidelines make one thing clear: AI will not be taking over the role of Canadian judges as decision-makers. Any opportunities to use AI to improve the administration of justice must preserve judicial independence, maintain public confidence in the justice system, and uphold the basic tenets of the law.


To discuss these issues, please contact the author(s).

This publication is a general discussion of certain legal and related developments and should not be relied upon as legal advice. If you require legal advice, we would be pleased to discuss the issues in this publication with you, in the context of your particular circumstances.

For permission to republish this or any other publication, contact Janelle Weed.

© 2024 by Torys LLP.

All rights reserved.