ethmarks · 2 days ago:

I agree wholeheartedly. It irks me when people critique automation because it uses large amounts of resources. Running a machine or a computer almost always uses far fewer resources than a human would to do the same task, so long as you consider the entire resource consumption. Growing the food a human eats, running the air conditioning for their home, powering their lights, fueling their car, charging their phone, and all the many other things necessary to keep a human alive and productive in the 21st century add up to a larger resource cost than almost any machine or system that performs the same work. From an efficiency perspective, automation is almost always the answer. The actual debate comes from the ethical perspective (the innate value of human life).

myaccountonhn · 13 hours ago:

This is a bad argument. Even if a machine replaced my job, I'm still going to eat, run the aircon, charge my phone, etc., and maybe do another job. So the energy used to do the job decreased, but total energy usage is higher: I'm still using the same amount of energy, and now the machine is also using energy that wasn't being used before. Efficiency gains lead to fewer resources being used if demand is constant, but if demand is elastic, they often lead to total resource consumption increasing. See also: Jevons Paradox (https://en.wikipedia.org/wiki/Jevons_paradox).

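To make the elasticity point concrete, here's a minimal sketch; every number is made up purely for illustration:

    # Toy illustration of Jevons paradox (all numbers are hypothetical).
    energy_per_task_before = 10.0   # kWh per task with the old, inefficient method
    tasks_before = 100              # tasks people bother to do at that cost

    energy_per_task_after = 5.0     # kWh per task after a 2x efficiency gain
    tasks_after = 250               # elastic demand: cheaper tasks -> far more of them

    total_before = energy_per_task_before * tasks_before   # 1000 kWh
    total_after = energy_per_task_after * tasks_after      # 1250 kWh

    print(f"before: {total_before:.0f} kWh, after: {total_after:.0f} kWh")
    # Per-task efficiency doubled, yet total consumption rose from 1000 to 1250 kWh.
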
pepoluan · a day ago:

Not ALL automation can be more efficient. Just ask Elon about his efforts to fully automate Tesla production. Same with AI: current LLM-based AIs are not at all as efficient as a human brain.

runarberg · 2 days ago:

I suspect you may be either underestimating how efficient our brains are at computing or severely underestimating how much energy these AI models take to train and run. Even including our systems of comfort, like refrigerated blueberries in January and AC cooling 40 °C heat down to 25 °C (but excluding car commutes, because please work from home or take public transit), the human is still far, far more energy efficient at, e.g., playing Go than AlphaGo is. With LLMs this isn't even close (and we can probably factor in that stupid car commute, because LLMs are just that inefficient).

zelphirkalt · 2 days ago:

Hm, that gives me an idea: the next human-vs-engine matches in chess, Go, and so on should be capped at a specific level of energy consumption for the engines, close to that of an extremely good human player, like a world champion or at least a grandmaster. Let's see how the engines keep up then!

ethmarks · 2 days ago:

That sounds delightful. Get a Raspberry Pi or something connected to a power supply capped at 20 watts (the approximate power draw of the human brain). It has to be able to run its algorithm within the per-move time limit for speed chess. Then you'd have to choose an algorithm that produces high-quality guesses before arriving at its final answer, so that if it runs out of time it can still make a move. I wonder if this is already a thing?

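The usual way to get "always has a decent move ready" is an anytime search: iterative deepening with a hard clock, keeping the best move from the last fully completed depth. A rough sketch of that pattern, using a toy Nim-like subtraction game rather than chess (a real engine would swap in chess rules and a proper evaluation function):

    # Anytime engine loop: deepen the search until the time budget runs out,
    # always retaining the best move found so far. Toy game for illustration only.
    import time

    def legal_moves(pile):
        # In this toy game you may remove 1, 2, or 3 stones; taking the last stone wins.
        return [n for n in (1, 2, 3) if n <= pile]

    def negamax(pile, depth):
        # Score for the player to move: +1 win, -1 loss, 0 unknown at the horizon.
        if pile == 0:
            return -1  # the previous player took the last stone, so we lost
        if depth == 0:
            return 0   # search horizon reached: neutral heuristic guess
        return max(-negamax(pile - m, depth - 1) for m in legal_moves(pile))

    def choose_move(pile, time_budget_s=0.05):
        deadline = time.monotonic() + time_budget_s
        best_move = legal_moves(pile)[0]   # always have *some* legal move ready
        depth = 1
        while time.monotonic() < deadline:
            # Finish a complete search at this depth, then deepen if time remains.
            scored = [(-negamax(pile - m, depth - 1), m) for m in legal_moves(pile)]
            best_move = max(scored)[1]     # keep the best guess found so far
            depth += 1
        return best_move

    # From a pile of 10 this should print 2: taking 2 leaves a multiple of 4,
    # which is a lost position for the opponent.
    print(choose_move(10))

The same loop works with any power-starved hardware: the clock (or the watt budget) decides how deep you get, and the answer only improves with whatever time is left.
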
keeda · a day ago:

Wait, hold on, let's put some numbers on this. Please correct my calculations if I'm wrong.

1. The human brain draws 12-20 watts [1, 2]. So, taking the lower end, a task taking one hour of our time costs 12 Wh.

2. An average ChatGPT query is between 0.34 Wh and 3 Wh. A long input query (10K tokens) can go up to 10 Wh. [3] I get the best results by carefully curating the context to be very tight, so optimal usage would be in the average range.

3. I have had cases where a single prompt has saved me at least an hour of work (e.g. https://news.ycombinator.com/item?id=44892576). Let's be pessimistic and say it takes 3 prompts at 3 Wh each (9 Wh) and 10 minutes (2 Wh) of my time prompting and reviewing to complete a task. That is 11 Wh for the same task, which still beats the unassisted human brain! (The arithmetic is spelled out in the quick script at the end of this comment.) And that's leaving aside the recent case where I vibecoded and deployed a fully tested endpoint on a cloud platform I had no prior experience with, over the course of 2-3 hours. I estimate it would have taken me a whole day just to catch up on the documentation and another 2 days tinkering with the tools, commands, and code. That's at least an 8x power saving assuming an 8-hour workday!

4. But let's talk data instead of anecdotes. If you do a wide search, there is a ton of empirical evidence that AI assistance improves programmer productivity by 5-30% (with a lot of nuance). I've cited some here: https://news.ycombinator.com/item?id=45379452 -- those studies don't measure prompt usage, so they can't be turned into energy estimates, but those are significant productivity boosts. Even the METR study that appeared to show AI coding lowering productivity also showed that AI usage broadly increased idle time in users. That is, calendar time for task completion may have gone up, but that included a lot of idle time where people were doing no cognitive work at all. Someone should run the numbers, but maybe it resulted in lower power consumption!

But what about the training costs? Sure, we've burned gazillions of GWh on training already, and the usual counterpoint is "what about the cost involved in evolution?", but let's assume we stopped training all models today. They would still serve all future prompts at the same power consumption rates discussed above. However, every new human takes 15-20 years of education just to become a novice in a single domain, followed by many more years of experience to become proficient. We're comparing apples and blueberries here, but that's a LOT of energy just to start becoming productive, whereas a trained LLM is instantly productive in multiple domains forever. My hunch is that if we do a critical analysis of amortized energy consumption, LLMs will probably beat out humans; if not already, then soon, with token costs plummeting all the time.

[1] https://psychology.stackexchange.com/questions/12385/how-muc...
[2] https://press.princeton.edu/ideas/is-the-human-brain-a-biolo...
[3] https://epoch.ai/gradient-updates/how-much-energy-does-chatg...

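Here is that back-of-the-envelope comparison as a quick script. The Wh figures are the rough estimates quoted above, so treat the output as illustrative rather than a measurement:

    # Energy per completed task: unassisted human vs. LLM-assisted, using the
    # rough figures from points 1-3 above. Purely illustrative.
    BRAIN_WATTS = 12.0                 # low-end estimate of human brain power draw

    def human_only(task_hours):
        """Energy (Wh) for a human doing the task unassisted."""
        return BRAIN_WATTS * task_hours

    def llm_assisted(prompts, wh_per_prompt, review_minutes):
        """Energy (Wh) for prompting an LLM plus the human time spent reviewing."""
        llm_wh = prompts * wh_per_prompt
        human_wh = BRAIN_WATTS * (review_minutes / 60.0)
        return llm_wh + human_wh

    unassisted = human_only(task_hours=1.0)                                   # 12 Wh
    assisted = llm_assisted(prompts=3, wh_per_prompt=3.0, review_minutes=10)  # 11 Wh

    print(f"unassisted: {unassisted:.0f} Wh, LLM-assisted: {assisted:.0f} Wh")
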
runarberg · 9 hours ago:

In my Go example we have a human and an AI model competing at the same task. A good AI model will perform much, much better and probably win the game, but if we measure the energy input into either player, the AI model will consume a lot more energy. However, a game of Go is not automation; it won't save us any time. The benefit of the AI model is that it helps human Go players improve their own game: finding new moves, new patterns, new proverbs, etc. Because of Go-playing AI models, human Go players now play their games better, but not more efficiently, nor faster.

In your LLM coding example you have a human and an AI model collaborating on a single task; both spend some amount of energy (taking your assumptions at face value, comparable amounts of energy) and produce a single outcome. In the Go example it is easy to compare energy usage, and the quality of the outcome is also easy to measure (simply who won the game). In your coding example the quality of the outcome is impossible to measure, and because the effort is collaborative, splitting the energy usage is complicated.

When talking about automation, my game of Go example falls apart. A much better example would be something like a loom, or a digital calculator. These tools help the human arrive at a particular outcome much faster and with much less effort than a human performing the task without the help of the machine. The time saved by using these tools is measured in several orders of magnitude, and the energy spent is on par with a human. It is easy to see how a loom or a digital calculator is more efficient than a human.

I guess if we take into account the training cost of an LLM, we should also take into account the production costs of looms and digital calculators. I don't know how to do that, but I can't imagine it would be anywhere close to that of an LLM. And with LLMs we have increased productivity not 5000x[1], but by 5%-30%. To me this does not sound like a revolutionary technology.

But I have my doubts about even the 5%-30% figure. We have preliminary research ranging anywhere from a negative productivity effect to your cited 5%-30%. We will have to wait for more research, and possibly some meta-analysis, before we can accurately assess the productivity boost of LLMs. But we will have to do a whole lot better than 5%-30% to justify the huge energy consumption of AI[2].

Personally, I am not convinced by your back-of-the-envelope calculations. It fails my sniff test that 9 Wh of matrix multiplication will consistently save you an hour of using your brain to perform the same task adequately. I know our brains are not super good at the logic required for coding (but neither are LLMs), but I know for a fact they are very efficient at it.

That said, I refuse to accept your framing that we can simply ignore the energy used in training, on the basis that it is as invalid as counting the energy used in evolving into our species, or that we can simply stop training new models and use the models we do have. That is simply not how things work. New models will get trained (unless the AI bubble bursts and the market loses interest), and the energy consumed by training is the bulk of the energy cost. Omitting it makes the case for AI comically easy to justify. I reject this framing. Instead of calculating, I'm gonna do a thought experiment.

Imagine a late 19th century where iron and steel production took an entire 2% of the world's energy consumption[3] (maybe an alternative reality where ironworking is simply that challenging and requires much higher temperatures). But the steam train could only carry the same load as a 20-mule team, and would only do it 5%-30% faster on average than the state-of-the-art cargo carriages of the time without steam power. Would you accept the argument that we should simply ignore the fact that rail production takes a whopping 2% of global energy consumption when accounting for the energy consumption of the steam train, even when it only provides a 5%-30% productivity boost? I don't think so.

1: I don't know how much the loom increased productivity, but this is what I would guess, without any way of knowing how to even find out.

2: That is, if you are only interested in the increased productivity. If you are interested in LLM models for some other reason, those reasons will have to be measured differently.

3: https://www.allaboutai.com/resources/ai-statistics/ai-enviro...

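To see why the training energy can't just be waved away: the amortized share of training energy per prompt depends entirely on how many prompts you assume will ever be served, and that assumption is doing all the work. A toy sketch, with every number hypothetical:

    # How much of the training energy lands on each prompt? Entirely a function
    # of the assumed lifetime prompt volume. All numbers below are made up.
    TRAINING_GWH = 50.0          # hypothetical energy to train one frontier model
    INFERENCE_WH_PER_PROMPT = 3  # per-prompt figure quoted upthread

    def wh_per_prompt(lifetime_prompts):
        training_wh = TRAINING_GWH * 1e9  # GWh -> Wh
        return training_wh / lifetime_prompts + INFERENCE_WH_PER_PROMPT

    for prompts in (1e9, 1e11, 1e13):
        print(f"{prompts:.0e} lifetime prompts -> {wh_per_prompt(prompts):.1f} Wh/prompt")
    # 1e+09 -> 53.0 Wh, 1e+11 -> 3.5 Wh, 1e+13 -> 3.0 Wh: the conclusion flips
    # depending on an assumption nobody can verify, which is exactly the problem.
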
ethmarks · 2 days ago:

That's a great point, and I think I was being vague before. To clarify, I was making a broad statement about automation in general. Running an automated loom is more efficient in every way than getting humans to weave cloth by hand. For most tasks, automation is more efficient. However, there are tasks that humans can still do more efficiently than our current engines of automation. Go is a good example because humans are really good at it and AlphaGo can only sometimes beat the top players despite massive training and inference costs.

On the other hand, I would dispute that LLMs fall into this category, at least for most tasks, because we have to factor in marginal setup costs too. I think that raising from infancy all of the humans needed to match the output speed of an LLM has a greater cost than training the LLM, even if you include the cost of mining the metal and powering the factories necessary to build the machines that the LLMs run on. I'm not 100% confident in this statement, but I do think it's much closer than you seem to think. Supporting the systems that support the systems that support humans takes a lot of resources. To use your blueberries example: while the cost of keeping the blueberries cold isn't much, growing a single serving of blueberries requires around 95 liters of water [1]. In a similar vein, the efficiency of the human brain is almost irrelevant, because the 20 watts of power drawn by the brain is akin, from a resource-consumption perspective, to the electricity consumed by the monitor displaying the LLM's output: it's the last step in the process, but without the resource-guzzling system behind it, it doesn't work. Just as the monitor doesn't work without the data center, which doesn't work without electricity, your brain doesn't work without your body, which doesn't work without food, which doesn't get produced without water.

As sramam mentioned, these kinds of utilitarian calculations tend to seem pretty inhuman. However, most of the time the calculations turn out in favor of automation. If they didn't, companies wouldn't be paying for automated systems (this logic doesn't apply to hype-based markets like AI; I'm talking more about markets that are stably automated, like textile manufacturing). If you want an anti-automation argument, you'll have a better time arguing from ethics instead of efficiency.

Again, thanks for the Go example. I genuinely didn't consider the tasks where humans are more efficient than automation.

[1]: https://watercalculator.org/water-footprint-of-food-guide/

runarberg · a day ago:

I'm not convinced this exercise in what to include and what not to include in the cost-benefit analysis will lead anywhere. We can always arbitrarily add an extra item to shift the calculation in our favor. For example, I could simply add the cost of creating the data that is fed into an LLM's training set; that creation is done by our human biological machinery, and hence carries the cost of the frozen blueberries, the rigid fiber insulation, the machinery that dug the water pipe for their shower, etc.

Instead I would like to shift the focus to the benefits of LLMs. I know the costs are high, very very very high, but you seem to think the benefits are equally high measured in time saved; that is, that the tasks automated save humans from doing similar work by miles. If that is what you think, I disagree. LLMs have yet to prove themselves in real-world applications. When we actually measure how many work-hours LLMs save, we are seeing that the effects are at best negligible (see e.g. https://news.ycombinator.com/item?id=44522772). Worse, generative AI is disrupting our systems in other ways: teachers, peer reviewers, etc. have to put in a bunch of extra work to verify that submitted work was actually written by that person and not simply generated by AI. Just last Friday I read that arXiv will no longer accept submissions unless they have been previously peer-reviewed, because they are overwhelmed by AI-generated submissions[1].

There are definitely technologies that have saved us time and created a much more efficient system than was previously possible. The loom is a great example, I would claim the railway is another, and even the digital calculator for sure. But LLMs, and generative AI more generally, are not that. There may be uses for this technology, but automation and energy/work savings is not one of them.

1: https://blog.arxiv.org/2025/10/31/attention-authors-updated-...

ethmarks · a day ago:

You've convinced me. I did not consider the human cost of producing training data, I did not consider whether LLMs are actually saving effort, and I did not consider the extra effort needed to verify LLM output. I have nothing more to add other than to thank you for taking the time to write such a persuasive and high-quality reply. The internet would be a better place if there were more people like you on it. Thank you for making me less wrong.

estimator7292 · 2 days ago:

Only slightly joking, but someone needs to put environmental caps on software updates. Just imagine how much energy it takes for each and every Discord user to download and install a 100 MB update... three times a week. Multiply that by dozens or hundreds of self-updating programs on a typical machine. Absolutely insane amounts of resources.

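For a sense of scale, a back-of-the-envelope script; every input here is a guess (user count, updates per week, and energy per GB transferred all vary wildly by source):

    # Rough sketch of the update-bandwidth point. All numbers are assumptions.
    USERS = 150e6              # hypothetical count of active Discord installs
    UPDATE_MB = 100            # size of one update, per the comment above
    UPDATES_PER_WEEK = 3
    KWH_PER_GB = 0.03          # assumed network + device energy per GB transferred

    gb_per_week = USERS * UPDATE_MB * UPDATES_PER_WEEK / 1024
    kwh_per_week = gb_per_week * KWH_PER_GB

    print(f"{gb_per_week:,.0f} GB/week ≈ {kwh_per_week:,.0f} kWh/week")
    # With these made-up inputs: roughly 44 million GB and 1.3 million kWh every
    # week, before counting any other self-updating apps on the same machines.
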
EagnaIonat · 2 days ago:

Goodhart’s Law will mess all that up for you.