What about the environmental impacts of AI?
Like any industrial activity, the creation and use of AI have an environmental cost. The two concerns that come up most often are greenhouse gas (GHG) emissions and water usage.[1] As of early 2025, these impacts are small at the individual level and moderate at the global level, but the global impacts are expected to grow.
Calculating the impact
The vast majority of AIs run in datacenters, which require electricity and water to operate. Water usage includes both direct cooling[2], where water is evaporated to cool the computers, and the water consumed in generating the electricity.
Most research on the environmental impacts of AI focuses on power use, but power use also serves as a basis for estimating water use and GHG emissions. By comparing the total worldwide electricity use of datacenters to their global water draw, the US Department of Energy found that about 5 mL of water are evaporated per Wh of energy used by datacenters. The amount of GHG emissions produced by electricity generation varies substantially by country and even by state, but here we’ll use the American average of 0.4 g of CO2 per Wh.
Impact of individual LLM queries
Let’s start small and check the energy use of a single LLM[3] query. For instance, should you worry that using ChatGPT to help you write a single email will use an unreasonable amount of power or water?
For popular models, training and inference use comparable amounts of compute, so looking only at inference underestimates the total resource cost by at most a factor of about two, which won’t affect our order-of-magnitude estimates. Earlier models[4] were reported to consume about 4 Wh per page of text produced. Epoch produced a more recent analysis of GPT-4, which powers most ChatGPT queries, and found a power draw of about 0.3 Wh for a typical query, one order of magnitude smaller than previous estimates.[5] [6] This power draw translates to about 2 mL of water use and 0.1 g of CO2-equivalent.[7]
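As a quick sanity check, here’s that conversion written out as a few lines of Python. The constants are the figures cited above (5 mL of water and 0.4 g of CO2 per Wh, and Epoch’s 0.3 Wh per query); the helper function is just for illustration:

```python
# Back-of-the-envelope conversion from energy use to water and CO2,
# using the average factors cited above.
WATER_ML_PER_WH = 5   # mL of water evaporated per Wh (US datacenter average)
CO2_G_PER_WH = 0.4    # g of CO2 emitted per Wh (US grid average)

def footprint(energy_wh: float) -> tuple[float, float]:
    """Return (water in mL, CO2 in g) for a given energy use in Wh."""
    return energy_wh * WATER_ML_PER_WH, energy_wh * CO2_G_PER_WH

query_wh = 0.3        # Epoch's estimate for a typical ChatGPT query
water_ml, co2_g = footprint(query_wh)
print(f"one query: {water_ml:.1f} mL water, {co2_g:.2f} g CO2")
# -> one query: 1.5 mL water, 0.12 g CO2 (the ~2 mL and ~0.1 g above)
```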
Is that a lot? Not really.[8] A request to GPT-4 uses about one hundredth of the energy needed to boil a cup of water, and emits about as much GHG as driving 0.5 meters.
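The boiling comparison is easy to verify, assuming a 250 mL cup heated from 20 °C to 100 °C and ignoring heat losses (the cup size and temperatures are our assumptions, not from the analyses above):

```python
# Energy needed to boil a cup of water, assuming a 250 mL cup
# heated from 20 °C to 100 °C (heating only, losses ignored).
SPECIFIC_HEAT_WATER = 4186  # J per kg per degree C
cup_kg, delta_c = 0.25, 80
cup_wh = cup_kg * SPECIFIC_HEAT_WATER * delta_c / 3600  # J -> Wh
print(f"boiling a cup: {cup_wh:.0f} Wh; "
      f"one 0.3 Wh query is 1/{cup_wh / 0.3:.0f} of that")
# -> boiling a cup: 23 Wh; one query is about one hundredth of that
```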
We often underestimate how much water goes into everyday activities and products. The 2 mL of water used to power a single ChatGPT query is about 150,000 times less than what it takes to grow a head of lettuce, and about ten million times less than what it takes to produce a kilogram of beef or chocolate. It is also dwarfed by the water needed to produce everyday items, such as the 2,700 L needed to produce a t-shirt.
For the average American, adding 10,000 extra ChatGPT queries per day (one query every 8 seconds) would increase their total carbon footprint and water usage by about 5% and 0.5%, respectively. One would need to query ChatGPT every second[9] to match the power and water usage of watching TV, which is itself orders of magnitude smaller than the impact of eating a beef hamburger.
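The percentages depend on what baseline footprint we assume for the average American, but the raw daily totals follow directly from the per-query figures. A minimal sketch; the baselines implied by the percentages above are noted in the comments:

```python
# Scaling the per-query figures up to 10,000 queries per day.
queries_per_day = 10_000
wh_per_query = 0.3                          # Epoch's per-query estimate
daily_wh = queries_per_day * wh_per_query   # 3,000 Wh = 3 kWh per day
daily_water_l = daily_wh * 5 / 1000         # 5 mL/Wh  -> 15 L of water per day
daily_co2_kg = daily_wh * 0.4 / 1000        # 0.4 g/Wh -> 1.2 kg of CO2 per day
# The ~5% and ~0.5% figures above then imply baseline footprints of roughly
# 24 kg of CO2 and 3,000 L of water per person per day.
print(f"{daily_wh / 1000:.0f} kWh, {daily_water_l:.0f} L water, "
      f"{daily_co2_kg:.1f} kg CO2 per day")
```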
Broader impact
Zooming out, what is the broader environmental impact of AI today?[10] To answer that, we must first look at the impact of datacenters as a whole.
One source estimates that in 2023, datacenters used 4% of US electricity[11] [12] and were responsible for 2% of US GHG emissions. Regarding water, the Department of Energy calculates that US datacenters consumed about 800 billion liters in 2023, which corresponds to about 0.5% of total US water draw.
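These figures are mutually consistent: applying the DOE’s 5 mL/Wh factor to 4% of total yearly US electricity use (about 4 * 10^15 Wh, see the footnote) recovers the 800 billion liter figure. A quick check:

```python
# Cross-checking the DOE water figure against the electricity share.
us_electricity_wh = 4e15                  # total yearly US use (see footnote)
datacenter_wh = 0.04 * us_electricity_wh  # 4% share -> 1.6e14 Wh
water_l = datacenter_wh * 5 / 1000        # 5 mL/Wh, converted to liters
print(f"{water_l:.1e} L")                 # -> 8.0e+11 L, i.e. 800 billion liters
```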
But most datacenter resource use does not go to AI. Alex de Vries estimates that in 2023, AI accounted for under 2.5% of total datacenter energy use. Another source estimates that it was 2% in Q1 2024, but could grow to 7% by 2025. So as of 2025, we can estimate that AI’s impact is about one order of magnitude smaller than the datacenter figures above.
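Combining the two shares gives AI’s slice of total US electricity, which is where the "one order of magnitude smaller" conclusion comes from:

```python
# AI's share of US electricity: 2.5% of the 4% that datacenters use.
datacenter_share = 0.04   # of US electricity (2023)
ai_share = 0.025          # of datacenter energy (de Vries' upper estimate)
print(f"AI: about {datacenter_share * ai_share:.1%} of US electricity")
# -> about 0.1%, one order of magnitude below the datacenter figures
```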
Impact on communities
The environmental costs may be manageable at both the global and individual levels, but they can be severe at the local level, as both chip fabrication and datacenters require centralized installations that sometimes stress local water supplies and energy grids. If the number and size of these installations grow as expected, affected localities might have to adapt their infrastructure or simply ban their construction.
Future AI use
AI has been moving very fast in the last few years, and 2024 saw the rise of reasoning models that use substantially more compute at inference time. These models are currently used for only a small fraction of queries, but if that changes, it might make sense to revisit the impact of individual queries.
So AI’s environmental footprint as of 2025 is quite modest, but AI usage is growing fast as it becomes more integrated into our everyday lives. Investment in datacenters for AI is increasing, and there are plans to expand the power grid[13] to power these new datacenters.
We don’t know what the future will bring. New techniques could increase or decrease energy use, and more fundamentally, if AI ends up transforming society, the picture might change in unpredictable ways.
Other concerns include air pollution (which correlates with GHG emissions), land use, the impacts of mining for materials, and electronic waste. ↩︎
Some water is recirculated for cooling; this water is subtracted from total water usage to calculate water draw. ↩︎
We concentrate on LLMs here as they are the most common use of modern AI, although Andy Masley argues that they probably represent only a small fraction of AI energy use. ↩︎
Sasha Luccioni’s 2022 analysis of BLOOM established 3-4 Wh as a widely cited figure for the energy use of individual queries to LLMs such as ChatGPT. This corresponds to about 20 mL of water per query. Note that the widely cited 500 mL of water per query was a misrepresentation of that data; the real value varied based on the datacenter used but was always much smaller than 500 mL. ↩︎
This can seem surprising since models have generally been growing in size, but a combination of better training algorithms, more efficient chips, model distillation, and inference efficiency improvements such as mixture-of-experts has pushed inference costs down. ↩︎
Julien Delavande built a tool to check the real-time energy use of models, using the open-weight model Qwen2.5-7B-Instruct. ↩︎
We can sanity-check these small numbers by observing that LLM providers must pay for the electricity and water they use, and they pass these costs on to the customers using their services. Since it costs users much less than 1 cent to generate a page of text, the total monetary cost of the water and energy for such a request cannot exceed 1 cent, and is in fact much smaller for most models. If price per token is a good indicator of energy use, which it seems to be, the sharp drop in cost per token is also consistent with more recent analyses pointing to lower inference costs. ↩︎
These articles use the old 4 Wh energy estimate, so all of their estimates should be reduced by one order of magnitude. ↩︎
Triple this rate if you are streaming 4k video. ↩︎
We concentrate on US data, but efficiency may vary substantially around the world, with, e.g., France producing 5-10x less carbon per Wh than the rest of the world. ↩︎
The total yearly US electricity consumption has been hovering around 4 * 10^15 Wh for over a decade. ↩︎
The majority of the planned new power plants will use low-carbon or renewable sources. In particular, this anticipated demand for energy has renewed interest in nuclear energy. ↩︎