Does the age of generative AI challenge corporate sustainability targets?

Though AI has been touted as an extraordinary problem solver for organizations of all sizes and sectors, its significant environmental footprint creates challenges for businesses that are developing AI or using it in their operations or supply chain. Addressing these challenges must be a critical part of an overall climate strategy.

The impact of AI’s environmental footprint on businesses

Many studies show that the environmental impact of both developing and using AI systems, particularly large language models and other generative (Gen) AI systems with large computational demands, is significant[1]. This impact is primarily measured by water and energy consumption and carbon emissions. For instance, it is estimated that ChatGPT, which has over 200 million weekly active users[2], needs nearly 10 times as much electricity for a single prompt as a simple Google search, and it consumes 500 mL of water for every 5 to 50 prompts[3]. Accordingly, by one estimate, AI is poised to drive a 160–200% increase in data centre power demand by 2030[4][5]. AI is also expected to contribute to a material increase in e-waste globally[6].

Although these figures may lessen over time as AI developers find ways to limit the environmental impact, there are reputational and other risks associated with these new technologies. Multiple big tech companies engaged in developing AI systems have recently attracted scrutiny for the sudden and significant increases in energy consumption, water consumption and carbon emissions they reported in 2024. In some of these cases, publicly stated climate change targets announced before the AI boom have become more challenging to meet in light of this expanded AI development.


The potential impact of AI on corporate climate change targets and other sustainability metrics isn’t limited to big tech companies. Organizations that employ generative AI in their operations (e.g., through leveraging tools from OpenAI, Microsoft or other developers) should be aware of the potentially significant impact on key sustainability metrics, including on an organization’s Scope 3 (indirect) supply chain emissions. Although emissions are significant at every phase of a Gen AI system’s life cycle, the greatest impact on emissions may come from the use of the generative AI system rather than its training or development[7]. For example, AI tools and applications with significant computational demands can lead to end-user devices across a company’s network consuming more overall energy by increasing device workloads.

New and emerging regulatory requirements

Both AI and climate change have been hot topics for lawmakers recently, and there are ongoing legislative reforms that may impact businesses when it comes to AI and environmental transparency:

  • In Canada, the recently passed Bill C-59 sets out rules for companies to substantiate certain environmental representations to the public. Though AI may be leveraged by businesses to help achieve sustainability goals, the Bill C-59 requirements should be considered in making any claims that current AI use is environmentally beneficial.
  • In the U.S., the Artificial Intelligence Environmental Impacts Act of 2024 was introduced in the Senate in February 2024. If passed, it would direct the National Institute of Standards and Technology (NIST) to create a voluntary framework for AI developers to report their environmental impacts. It would also direct an interagency study to measure the positive and negative environmental impacts of AI generally. Though the bill would not create any mandatory reporting measures for developers, it may be an indicator of potential future requirements in this space.
  • AI-specific regulations in Canada, the EU, and the U.S. are on the way. While none of them has so far included a specific environmental disclosure requirement, they do impose significant transparency requirements on developers and deployers of AI systems—in particular, of “high-impact” or “high-risk” systems. As the regulatory landscape develops, given the importance placed on transparency across jurisdictions, more specific regulation on AI sustainability disclosures could materialize (for more on AI regulations, read “What’s new with artificial intelligence regulation in Canada and abroad?”).

Recommendations for businesses

This is not to say that AI usage can only hurt ESG metrics—studies are also emerging that demonstrate how AI systems can be employed by businesses to improve sustainability and mitigate their environmental impacts in the longer term. AI systems can also be leveraged to assist efforts to set and implement sustainability targets, for example by assisting in evaluating various climate scenarios and their impacts on an organization. However, for now, the risks outlined above remain important considerations in building an effective and low-risk AI strategy and governance plan.

To mitigate these risks, businesses should consider their AI strategies in developing and publicly reporting on their climate change targets. They should be transparent about 1) the assumptions regarding their AI training, development activities and AI use, and 2) how those assumptions might influence the achievement of their sustainability targets and objectives. This might mean, for example, providing information about how early investments in AI tools and technologies have led to increased emissions or energy usage and explaining how these investments will result in longer-term improvements in sustainability. Transparency is crucial—not just to anticipate future regulatory requirements for AI-related transparency but also to avoid the reputational risk associated with walking back sustainability targets that no longer seem achievable in the age of AI (for more on AI governance, read “The board says we need an AI strategy. How do we start?”).

Particularly for those businesses developing or using “high-risk” or “high-impact” systems as defined by upcoming AI legislation, we recommend creating a robust internal AI governance framework that facilitates compliance with anticipated requirements, establishes methods to forecast and quantify the environmental impact of AI development and use, and considers strategies for the public reporting of AI-related sustainability impacts.


This article was published as part of the Q4 2024 Torys Quarterly, “Machine capital: mapping AI risk”.

To discuss these issues, please contact the author(s).

This publication is a general discussion of certain legal and related developments and should not be relied upon as legal advice. If you require legal advice, we would be pleased to discuss the issues in this publication with you, in the context of your particular circumstances.

For permission to republish this or any other publication, contact Janelle Weed.

© 2024 by Torys LLP.

All rights reserved.
