A new study published this month is throwing cold water on some of the technology industry's most ambitious climate promises — and raising a question that researchers, policymakers, and citizens can no longer afford to postpone: is artificial intelligence, on balance, a weapon against climate change, or one of its accelerants?

The answer, inconveniently, appears to be both. And which it ultimately becomes may hinge entirely on decisions being made right now, in boardrooms and legislatures, that the public is barely aware of.

The Greenwashing Report

On February 17, a report by Beyond Fossil Fuels — a German non-profit — examined over 150 climate-related claims from the world's biggest AI companies, including Google, Microsoft, and Amazon, as well as institutions like the International Energy Agency. The findings were damning. Only 26% of the climate benefit claims examined cited peer-reviewed academic papers as evidence. Another 36% cited no evidence whatsoever. The remainder leaned on corporate reports, media articles, or unpublished work.

"The evidence for massive climate benefits of AI is weak, whilst the evidence of substantial harm is strong." — Beyond Fossil Fuels Report, February 2026

The report's authors found not a single verified example in which a generative AI system — ChatGPT, Gemini, Copilot — had produced a "material, verifiable and substantial" reduction in real-world emissions. Meanwhile, a January 2026 study in the journal Patterns estimated that AI data centres alone may have emitted between 32.6 million and 79.7 million tonnes of CO₂ in 2025 — roughly equivalent to the annual emissions of a small European country.

36% of Big Tech climate AI claims cite no evidence
~56M tonnes CO₂ from AI data centres in 2025 (estimated midpoint)
5% potential global emissions reduction by 2035, per IEA
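The "~56M tonnes" callout above is simply the midpoint of the Patterns study's range, which a two-line sketch confirms:

```python
# Midpoint of the Patterns study's 2025 estimate for AI data-centre emissions.
low_mt, high_mt = 32.6, 79.7          # million tonnes CO2, the study's range
midpoint_mt = (low_mt + high_mt) / 2  # (32.6 + 79.7) / 2 = 56.15 -> "~56M"
print(f"Estimated midpoint: ~{midpoint_mt:.0f} million tonnes CO2")
```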

But the Real Breakthroughs Are Different

Here is where the picture becomes genuinely complicated — and more interesting. The AI applications that may meaningfully move the needle on climate are almost never the large language models dominating public conversation. They are narrow, physics-informed, purpose-built tools operating largely out of the public eye.

Take Aurora, a foundation model for Earth systems developed by an international research team and published in Nature in 2025. Trained on over a million hours of atmospheric, oceanic, and environmental data, Aurora can produce faster, cheaper, and more accurate forecasts for air quality, ocean waves, and extreme weather than conventional supercomputer-based models. Its developers argue it could fundamentally shift societies from reactive crisis response to proactive climate resilience — predicting floods, heatwaves, and tropical cyclones with enough lead time to save lives and protect infrastructure.

Or consider what the World Economic Forum calls "Planetary Intelligence" — a new class of AI being discussed at Davos in 2026 that would fuse satellite imagery, physical sensor networks, and machine learning into something resembling a continuous, real-time model of the Earth itself. Not a static report published months after the fact, but a living system that watches rivers swell, glaciers recede, and crop yields fluctuate, and updates its predictions accordingly. The WEF calls it an architecture for a "planetary-scale mind" — anchored in rivers and roads, crops and clouds, ice sheets and cities.

"Despite the prevailing narrative, AI is neither a climate villain nor a climate savior. It's a catalyst." — Atmos Magazine, January 2026

The Uncomfortable Arithmetic

The International Energy Agency has estimated that AI could cut global emissions by as much as 5% by 2035 through accelerated innovation in the energy sector alone — potentially offsetting the emissions generated by its own data centres many times over. But the Beyond Fossil Fuels report warns that this IEA figure is itself based on "extrapolation" and optimistic assumptions rather than observed, verified reductions.
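Taking the IEA's best-case number at face value, the "many times over" claim is easy to sanity-check. The sketch below uses a rough figure of 38 billion tonnes for annual global CO₂ emissions — a ballpark assumption, not a number from the report — alongside the data-centre midpoint cited earlier:

```python
# Back-of-envelope check of the offset claim, under stated assumptions:
# - annual global CO2 emissions of ~38 billion tonnes (assumed ballpark)
# - the IEA's "up to 5%" reduction figure taken at face value
# - the ~56 million tonne midpoint for 2025 AI data-centre emissions
GLOBAL_EMISSIONS_MT = 38_000   # million tonnes CO2 per year (assumption)
IEA_REDUCTION_SHARE = 0.05     # "up to 5%" by 2035, per the IEA
DATA_CENTRE_MT = 56.15         # midpoint of the Patterns study's range

potential_cut_mt = GLOBAL_EMISSIONS_MT * IEA_REDUCTION_SHARE  # 1,900 Mt
offset_ratio = potential_cut_mt / DATA_CENTRE_MT              # roughly 34x
print(f"Potential cut: {potential_cut_mt:.0f} Mt, "
      f"~{offset_ratio:.0f}x current data-centre emissions")
```

The point is not the precise ratio — every input is contested — but that the best case dwarfs current data-centre emissions, which is exactly why the report's warning about unverified "extrapolation" matters so much.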

What is not in dispute is the energy trajectory of AI infrastructure. Some of the new hyperscale data centres proposed to support next-generation AI require one gigawatt or more of power — the equivalent output of an entire nuclear power plant, for a single facility. MIT Technology Review's 2026 Breakthrough Technologies list included "hyperscale AI data centres" not as an unambiguous positive, but as a technology transforming global energy demand in ways that warrant serious scrutiny.

The honest accounting looks something like this: generative AI, as currently deployed, is a voracious consumer of electricity whose benefits to the climate remain largely unmeasured and unverified. Narrow, scientific AI — the kind being used to predict weather, design batteries, model climate systems, and discover low-carbon materials — is a genuinely powerful tool for addressing the crisis, but it is not what most people mean when they say "AI."

What Comes Next

Tara O'Shea, managing director of the Natural Climate Solutions Initiative at Stanford's Woods Institute for the Environment, has been working with AI-based environmental tools since 2017. "I do feel like public discourse around AI is focused solely on large language models and their consumer applications," she told Atmos magazine in January. "The reality in my world is that machine learning is so much more than that. We can make datasets talk to each other in new ways, to find correlations and insights that we had previously been missing."

That distinction — between AI as a product and AI as a scientific instrument — may be the most important clarification of the decade. The former is growing rapidly, consuming energy, and generating unverified climate claims. The latter is quietly, methodically changing what humanity knows about its own planet and how fast it can act on that knowledge.

The question of whether AI saves the planet it helps to warm will not be answered by any single study or headline. It will be answered by regulation that holds companies accountable for climate claims; by investment decisions that direct compute resources toward verified impact; and by citizens who demand clarity about which version of AI is actually being built, and for whom.