Discussion about this post

Jessica S:

Ooh! This is interesting. I still need to read his whole piece. I looked at the graph, and there in the middle is "asking ChatGPT a question." Does he say how he came up with the data for this statistic? I was reading somewhere recently (I wish I remembered where) that being polite to AI is a huge waste of energy; adding three words like "Would you please..." to the beginning of a question takes 50 ml more water (was it really that much?) than leaving them off. So, though he quantifies some other statistics with measurements, the question asked of ChatGPT is unquantified.

In addition, his chart is in units of one. Comparing "asking ChatGPT a question" to "10 min 4k video" mixes metrics. How many questions could someone ask ChatGPT in 10 minutes? I can type fast! Shouldn't they be compared that way? And then there's the water usage of generative AI (asking for a composition) vs. simply asking for data. These require vastly different amounts of water, yet he doesn't quantify them separately in the chart.

But I'll search for that article I read that I thought had good quantitative measurements on water for input & output. Or maybe there's better information somewhere. I like a reliable reference to a primary source for statistical data so I can check how others are using their statistics; if that can be found, it would be awesome.

Pepetto:

A back-of-the-envelope calculation to give another estimate for LLM consumption, including training:

Let's assume the H100 GPU represents most of the AI companies' compute.

Each has a 700 W max TDP, so (× 8,760 hours in a year) about 6.1 million watt-hours per year. Times roughly 2 million H100s sold in 2024 worldwide → about 12,000 gigawatt-hours.

(Assume the GPUs run at 100% utilization all the time, either for inference or for training new models.)

OpenAI has 500 million "users" (I couldn't find good sources for this, but that number keeps coming up). It's pretty likely that most people using LLMs have made an OpenAI account at some point.

So 12,000 GWh divided by 500 million LLM users gives about 24,000 Wh (24 kWh) per user per year.

That's about equal to running a kitchen oven for 15 hours. That's more than I expected, but still negligible compared to ordering on Amazon, eating meat, or driving a car.
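The arithmetic above can be sketched in a few lines. Note that the 700 W TDP, 2 million GPUs, full utilization, and 500 million users are all the comment's own assumptions, not measured figures:

```python
# Back-of-envelope estimate of LLM energy use per user, under the
# assumptions stated above (not measured data):
# - ~2 million H100 GPUs at 700 W max TDP, running 24/7 all year
# - all of that compute attributed to ~500 million users

H100_TDP_W = 700            # watts per GPU, assumed fully utilized
HOURS_PER_YEAR = 8760
GPU_COUNT = 2_000_000       # assumed H100s sold in 2024
USERS = 500_000_000         # claimed OpenAI user count

wh_per_gpu_year = H100_TDP_W * HOURS_PER_YEAR           # ~6.1 million Wh
total_gwh = wh_per_gpu_year * GPU_COUNT / 1e9           # ~12,000 GWh
wh_per_user = wh_per_gpu_year * GPU_COUNT / USERS       # ~24,500 Wh

print(f"{total_gwh:,.0f} GWh total, {wh_per_user:,.0f} Wh per user")
```

At roughly 1.6 kW for a typical oven, ~24.5 kWh per user does work out to about 15 hours of oven time.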

LLMs have plenty of problems; being bad for the environment isn't one of them (yet).

I'll consider limiting my LLM use (which I do enjoy) once we've considered stopping those much worse things first.
