Science Insight: Who is responsible for AI’s environmental cost? – Explained

We explore the scientific background, research findings, and ethical debate behind the question of who is responsible for AI’s environmental cost.

Artificial intelligence (AI) consumes vast amounts of energy, water, and material resources. A new academic review warns that while concern about its carbon footprint is growing, the ethical foundations guiding that debate remain thin and uneven.

That warning comes from the study The ethics in sustainable AI: a scoping literature review on normativity in the academic discourse on the environmental sustainability of AI, published in the journal AI & Society. The paper maps how scholars frame AI’s environmental impact, who they assign responsibility to, what solutions they prioritize, and which ethical principles, if any, underpin those proposals.

Energy dominates the debate, while water and materials are overlooked

The review finds that most academic discussions of sustainable AI focus primarily on energy consumption and greenhouse gas emissions. Training large-scale machine learning models and operating data centers are frequently cited as key contributors to rising electricity demand. As AI adoption expands across sectors, the energy intensity of model development has become a central point of concern.

However, the authors argue that this energy-centric framing obscures other environmental impacts. Water consumption, particularly for cooling data centers, receives minimal attention in the literature. Material resource extraction, including the mining of critical minerals required for hardware production, is also underexplored. Electronic waste and hardware lifecycle impacts are acknowledged in some studies but remain far less prominent than carbon emissions.

The result, according to the review, is a fragmented environmental narrative. Only a small number of publications address water use and material extraction together, suggesting that most research treats environmental dimensions in isolation rather than as interconnected systems. This narrow framing risks underestimating the broader ecological footprint of AI infrastructure.

The authors also note that studies vary in whether they focus on the training phase of AI models or on long-term deployment and inference impacts. Emphasis on model training often leads to calls for more energy-efficient algorithms, while attention to deployment raises questions about cumulative system-level effects as AI becomes embedded across industries.

A technofix orientation shapes proposed solutions

When examining proposed solutions, the review identifies a clear pattern: technical optimization dominates the discourse. Nearly half of the solution proposals center on improving software efficiency, reducing computational intensity, or designing more energy-efficient algorithms. Hardware improvements and infrastructure efficiency measures are also discussed, though less frequently.

Renewable energy sourcing for data centers appears in a smaller subset of recommendations. While shifting to low-carbon electricity can reduce emissions, the authors caution that this does not eliminate other environmental impacts, including water use and material extraction.

Some studies propose process-oriented interventions such as transparency requirements, environmental reporting standards, and reflective development practices. A smaller group calls for regulatory or governance measures. However, these institutional approaches remain secondary to optimization strategies.

The authors characterize this pattern as a technofix orientation. Environmental sustainability is often framed as a problem solvable through efficiency gains, rather than as a systemic ethical challenge requiring broader political, social, and economic reflection. This emphasis reflects the dominance of computer science and engineering perspectives within the sustainable AI literature.

By focusing primarily on computational efficiency, the debate risks overlooking questions about scale, necessity, and distribution. Questions such as whether all AI applications are environmentally justified, or how benefits and burdens are shared across regions and populations, receive comparatively limited attention.

Responsibility and the thinness of ethical grounding

The review also examines how responsibility for AI’s environmental impact is distributed across actors. Developers and technical researchers are most frequently identified as responsible for mitigating environmental harm. Policymakers are mentioned less often, and end users are rarely assigned significant responsibility.

This concentration of responsibility upstream in the development process narrows the accountability frame. It suggests that sustainability is largely a matter of design choice, rather than also involving governance structures, market incentives, consumer demand, and global supply chains.

Most important, however, is the limited presence of explicit ethical reasoning. A majority of reviewed publications treat environmental impact as self-evidently problematic without grounding their arguments in clearly articulated ethical theories. Few papers engage directly with moral philosophy or normative frameworks.

Where ethical considerations do appear, they tend to cluster around two themes. Some contributions implicitly adopt a utilitarian orientation, weighing environmental costs against societal benefits of AI deployment. Others emphasize justice, fairness, and equity, raising concerns about disproportionate environmental burdens and intergenerational impacts.

Justice-oriented discussions often critique purely efficiency-driven approaches, arguing that minimizing aggregate emissions does not necessarily address unequal exposure to environmental harm. Questions about who benefits from AI innovation and who bears ecological costs emerge as central but underdeveloped themes.

The authors argue that without clearer normative grounding, the sustainable AI debate risks remaining superficial. Efficiency gains alone cannot resolve deeper questions about ecological limits, technological necessity, or moral responsibility to non-human life and future generations.

Toward a broader ethical framework for sustainable AI

In response to these gaps, the study calls for greater interdisciplinary engagement: philosophers, social scientists, environmental scholars, and policymakers should play a larger role in shaping the sustainable AI discourse. Technical optimization, the authors argue, is necessary but not sufficient.

The authors advocate for expanding ethical frameworks beyond human-centered cost-benefit calculations. Relational and more-than-human perspectives, which consider ecological systems and non-human entities as morally significant, could broaden the scope of analysis. Such approaches challenge purely anthropocentric assumptions embedded in many current debates.

The paper also suggests that sustainability discussions should move beyond incremental improvements and consider structural questions. These include whether certain high-resource AI applications are justified, how to set limits on model scale, and how to ensure equitable access to the benefits of AI without externalizing environmental costs.