▲ NohatCoder 5 hours ago
So let us compare AI to aviation. Globally, aviation accounts for approximately 830 million tons of CO₂ emissions per year [1]. If you power your data centre with quality gas power plants, you will emit 450 g of CO₂ per kWh of electricity consumed [2]; that is 3.9 million tons per year for a 1 GW data centre. So, depending on the power mix, it will take somewhere around 200 GW of data centres for AI to "catch up" to aviation. I have a hard time finding any numbers on current consumption, but if you believe what the AI folks are saying, we will get there soon enough [3].

As for what your individual prompts contribute, it is impossible to get good numbers, and it will obviously vary wildly with the type of prompt, the choice of model, and the number of prompts. But I am fairly certain that someone whose job is prompting all day will generally spend several plane trips' worth of CO₂.

Now, if this new tool allowed us to do amazing new things, there might be a reasonable argument that it is worth some CO₂. But when you are a programmer and management demands AI use, so that you end up doing a worse job, with worse job satisfaction, while spending extra resources, it is just a Kinder egg of bad.

[1] https://ourworldindata.org/grapher/annual-co-emissions-from-...

[2] https://en.wikipedia.org/wiki/Gas-fired_power_plant

[3] https://www.datacenterdynamics.com/en/news/anthropic-us-ai-n...
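The arithmetic above can be checked quickly. This is a sketch using the figures the comment assumes (450 g CO₂/kWh for gas generation, 830 Mt CO₂/yr for aviation); the function name is just for illustration.

```python
HOURS_PER_YEAR = 24 * 365  # 8760 h, assuming continuous operation

def datacentre_co2_mt_per_year(gw: float, g_per_kwh: float = 450.0) -> float:
    """Megatons of CO2 per year for a data centre drawing `gw` gigawatts
    around the clock, at `g_per_kwh` grams of CO2 per kWh generated."""
    kwh_per_year = gw * 1e6 * HOURS_PER_YEAR  # 1 GW = 1e6 kW
    return kwh_per_year * g_per_kwh / 1e12    # grams -> megatons

print(datacentre_co2_mt_per_year(1))       # ~3.9 Mt/yr for a 1 GW data centre
print(830 / datacentre_co2_mt_per_year(1)) # ~210 GW to match aviation's 830 Mt/yr
```

So the "around 200 GW" figure in the comment checks out under those assumptions.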
▲ 0biodiversity 4 hours ago
> But I am fairly certain that someone whose job is prompting all day will generally spend several plane trips worth of CO₂.

I don't know about the gigawatts needed for future training, but this comparison of prompts with plane trips looks wrong. Even making a prompt every second for 24 hours amounts to only about 2.6 kg of CO₂ on the average Google LLM evaluated here [1]. Meanwhile, typical flight emissions are 250 kg per passenger per hour [2]. So you would need to parallelize to 100 or so agents, each prompting once a second, to match that, which is quite a serious scale.

[1] https://cloud.google.com/blog/products/infrastructure/measur...
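The counter-arithmetic can be sketched the same way. Assumed figures: roughly 0.03 g CO₂e per median prompt (the order of magnitude from Google's study cited above) and 250 kg CO₂ per passenger-hour of flying from the parent comment.

```python
G_PER_PROMPT = 0.03        # assumed: ~0.03 g CO2e per median text prompt
FLIGHT_KG_PER_HOUR = 250   # assumed: kg CO2 per passenger-hour of flying

prompts_per_day = 24 * 60 * 60                 # one prompt every second
daily_kg = prompts_per_day * G_PER_PROMPT / 1000
print(daily_kg)                                # ~2.6 kg CO2e per day

# Agents needed (each prompting once a second, all day) to emit
# as much as a single hour of flying:
agents = FLIGHT_KG_PER_HOUR / daily_kg
print(agents)                                  # ~100 agents
```

Under those assumptions, one round-the-clock prompter is two orders of magnitude below a single flight-hour per day.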
▲ leptons 5 hours ago
When they stopped measuring compute in TFLOPS (or any deterministic compute metric) and started using gigawatts instead, you know we're heading in the wrong direction.

https://nvidianews.nvidia.com/news/openai-and-nvidia-announc...