8 August 2025
Mistral published Our contribution to a global environmental standard for AI, a “first-of-its-kind comprehensive study to quantify the environmental impacts of our LLMs”.
[Infographic: Mistral AI's life-cycle assessment (ACV) diagram]
What’s interesting:
- looking at the diagram, most of the emissions are in model training and inference (inference = usage). The inference emissions cover 18 months of use. It's a shame they're not broken out separately from model training, but we can guess that most of the emissions come from inference - and the same will probably be true for all AI services.
- training and 18 months of usage footprint: 20,400 tCO2e, 281,000 m³ of water consumed, and 660 kg Sb eq - kg antimony equivalent is the standard unit for measuring abiotic (non-living) resource depletion
- marginal impacts of inference, more precisely "the use of our AI assistant Le Chat for a 400-token response" - excluding users' terminals: 1.14 gCO2e, 45 mL of water, and 0.16 mg of Sb eq. (Compare to Sam Altman's recent comments on OpenAI: "the average query uses about 0.34 watt-hours [and] about 0.000085 gallons of water" - some back-of-envelope arithmetic after this list puts these side by side.)
- “Our study also shows a strong correlation between a model’s size and its footprint. Benchmarks have shown impacts are roughly proportional to model size: a model 10 times bigger will generate impacts one order of magnitude larger than a smaller model for the same amount of generated tokens. This highlights the importance of choosing the right model for the right use case.” (A tiny scaling sketch follows below.)
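Some back-of-envelope arithmetic on those numbers. The constants below are from the post; everything derived is my own rough estimate, and note the Mistral and OpenAI queries aren't measured on the same basis (different response lengths, different scopes), so treat the comparison loosely:

```python
ML_PER_GALLON = 3785.41

# Marginal per-response figures for Le Chat (400-token response)
lechat_co2_g = 1.14     # gCO2e
lechat_water_ml = 45.0  # mL

# Sam Altman's figure for an "average query"
openai_water_ml = 0.000085 * ML_PER_GALLON  # ~0.32 mL
print(f"OpenAI water per query: {openai_water_ml:.2f} mL "
      f"(Le Chat's figure is ~{lechat_water_ml / openai_water_ml:.0f}x larger)")

# Totals for training + 18 months of usage
total_co2_g = 20_400 * 1_000_000      # 20,400 tCO2e -> grams
total_water_ml = 281_000 * 1_000_000  # 281,000 m3 -> mL

# If ALL of the footprint were inference (it isn't - training is in
# there too), the totals would cap the number of 400-token responses at:
print(f"Ceiling from CO2:   {total_co2_g / lechat_co2_g:.2e} responses")
print(f"Ceiling from water: {total_water_ml / lechat_water_ml:.2e} responses")
```

The two ceilings come out around 1.8e10 (carbon) and 6.2e09 (water) responses - a ~3x disagreement, which presumably reflects training and inference contributing in different proportions to the carbon and water budgets.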
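And a tiny illustration of the proportionality claim in that last bullet: if impact scales roughly linearly with parameter count for a fixed number of generated tokens, then a 10x smaller model should cut per-response impact by about 10x. The function and the extrapolated figures here are illustrative, not from the study:

```python
def scaled_impact(base_impact_g: float, size_ratio: float) -> float:
    """Naive linear scaling: a model `size_ratio` times bigger is assumed
    to have `size_ratio` times the per-response impact, per the study's
    rule of thumb."""
    return base_impact_g * size_ratio

# Starting from Le Chat's 1.14 gCO2e per 400-token response:
print(scaled_impact(1.14, 10))   # ~11.4 gCO2e for a 10x bigger model
print(scaled_impact(1.14, 0.1))  # ~0.11 gCO2e for a 10x smaller one
```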
Elsewhere in AI emissions:
Previously: "Do the AIs know their carbon emissions? No." and "Working notes: carbon emissions of AI"