Sustainable AI Regulation

by Philipp Hacker, et al.

This paper argues that AI regulation needs to shift its focus from trustworthiness to sustainability. With the carbon footprint of large generative AI models like ChatGPT or GPT-4 adding urgency to this goal, the paper develops a roadmap for making AI, and technology more broadly, environmentally sustainable. It explores two key dimensions: legal instruments to make AI greener, and methods to render AI regulation itself more sustainable. Concerning the former, transparency mechanisms, such as disclosure of the GHG footprint under Article 11 AI Act, could be a first step. However, given the well-known limitations of disclosure, regulation needs to go beyond transparency. Hence, the paper proposes a mix of co-regulation strategies, sustainability by design, restrictions on training data, and consumption caps. This regulatory toolkit may then, in a second step, serve as a blueprint for other information technologies and infrastructures whose high GHG emissions pose significant sustainability challenges, such as blockchain, metaverse applications, and data centers. The second dimension consists of efforts to render AI regulation, and by implication the law itself, more sustainable. Certain rights we have come to take for granted, such as the right to erasure (Article 17 GDPR), may have to be limited on sustainability grounds. For example, the subjective right to erasure must, in some situations, be balanced against the collective interest in mitigating climate change. The paper formulates guidelines to strike this balance equitably, discusses specific use cases, and identifies doctrinal legal methods for incorporating such a "sustainability limitation" into existing law (e.g., Art. 17(3) GDPR) and future law (e.g., the AI Act). Ultimately, law, computer science, and sustainability studies need to team up to effectively address the dual large-scale transformations of digitization and sustainability.


Related papers:

- Aligning Explainable AI and the Law: The European Perspective
- The AI Act proposal: a new right to technical interpretability?
- The European AI Liability Directives – Critique of a Half-Hearted Approach and Lessons for the Future
- Regulating Gatekeeper AI and Data: Transparency, Access, and Fairness under the DMA, the GDPR, and beyond
- The Dangers of Computational Law and Cybersecurity; Perspectives from Engineering and the AI Act
- Regulating ChatGPT and other Large Generative AI Models
- Terms-we-Serve-with: a feminist-inspired social imaginary for improved transparency and engagement in AI