Author
Jacob Hinton
Recent hype around generative AI has led to a flurry of activity in the legal profession. Litigators are considering how AI can be used to increase the value and efficiency of their work, while mitigating the risks it poses. Law societies are issuing directions on using Gen AI in legal practice, while reminding lawyers of their professional obligations1.
Now, all eyes are turning to the courts: what role(s) will AI play in the courtroom?
The use of Gen AI in court proceedings made headlines earlier this year when a lawyer referred to cases “hallucinated” by ChatGPT in a notice of application. Holding the lawyer personally liable for additional expenses incurred as a result of her mistake, the British Columbia Supreme Court observed that “generative AI is still no substitute for the professional expertise that the justice system requires of lawyers” and that “competence in the selection and use of any technology tools, including those powered by AI, is critical” to the integrity of the justice system2.
In a similar case in the United States, Mata v. Avianca, the U.S. District Court for the Southern District of New York said that while reliable AI tools can provide useful assistance, lawyers are required to do their due diligence to ensure that their submissions are correct. Fake submissions waste time, deprive clients of legitimate citations to support their case, and harm the reputation of those involved in the case—and of the judicial system as a whole3.
Some Canadian courts have communicated their expectations on the use of Gen AI in court materials. For example:
Similar guidance has been issued by the Superior Court of Québec, the Supreme Court of Newfoundland and Labrador, the Provincial Court of Nova Scotia, and the Supreme Court of Yukon.
Canadian courts have also started to consider and respond to speculations about their own use of AI. For example:
United States
Some U.S. judges have expressed enthusiasm for using AI to assist with decision-making. In a concurrence in James Snell v. United Specialty Insurance Company, for example, Judge Newsom of the 11th Circuit announced that he used Gen AI to confirm his preliminary interpretation of the word “landscaping” in an insurance policy. He concluded that it is no longer ridiculous to think that AI-powered large language models like ChatGPT “might have something useful to say about the common, everyday meaning of the words and phrases used in legal texts”4. For more about the use of AI in legal contexts, read “What are the new best practices for AI for legal teams?”
United Kingdom, New Zealand, and Australia
In December 2023, the United Kingdom Courts and Tribunals Judiciary released guidance for judicial office holders on the responsible use of AI in courts and tribunals. The guidance states that “[a]ny use of AI by or on behalf of the judiciary must be consistent with the judiciary’s overarching obligation to protect the integrity of the administration of justice”5. While the guidance explains that AI can help with summarizing material, writing presentations, and performing administrative tasks, it does not recommend AI for legal research or analysis.
Similar guidelines have been released by the courts of New Zealand. Rather than discouraging the use of AI for legal research and analysis, however, the New Zealand guidelines simply state that extra care is required.
The Australasian Institute of Judicial Administration has also released a guide on AI decision-making for judges, tribunal members, and court administrators. The guide provides an overview of AI tools, suggests where decision-makers can make use of AI, and discusses the impact of these tools on the core judicial values of open justice, accountability and equality before the law, procedural fairness, access to justice, and efficiency.
While it is unlikely that AI will be running Canadian courtrooms any time soon, we will likely see it playing an increasing role in the background of litigation processes, on both sides of the bench. Litigants and lawyers should therefore be aware of risks, follow courts’ guidance, and abide by the basic duties and principles that guide and govern legal disputes—no matter what technological transformations are to come.
This article was published as part of the Q4 2024 Torys Quarterly, “Machine capital: mapping AI risk”.
© 2024 by Torys LLP. All rights reserved.