> 2. LLMs are far, far more efficient than humans in terms of resource consumption for a given task: https://www.nature.com/articles/s41598-024-76682-6 and https://cacm.acm.org/blogcacm/the-energy-footprint-of-humans...

I want to push back on this argument. It seems suspect given that none of these tools are creating profit, and so they require funds and resources that essentially come from the combined efforts of much of the economy. In other words, the energy externalities here are enormous and never factored into these comparisons, even though these models could never have gotten off the ground without the massive energy expenditures that were (and continue to be) needed to sustain their funding.

To simplify: LLMs haven't clearly created the value they have promised, but they have eaten up massive amounts of capital and value produced by everyone else, and producing that capital had energy costs too. Whether all this AI ends up being more energy efficient than people needs to be measured on whether AI actually delivers on its promises and recoups the investments.

EDIT: That is, it is wildly unclear at this point that, if we all pivot to AI, we will produce value economy-wide at a lower energy cost, and even if we grant that this will eventually happen, it is not clear how long it will take. And sure, humans have these costs too, but humans have a sort of guaranteed potential future value, whereas the value of AI is speculative. So comparing the energy costs of the two at this frozen moment in time just doesn't quite feel right to me.
keeda | 3 days ago:

These tools may not be turning a profit yet, but as many point out, that is largely because free usage is being deeply subsidized to capture market share and discover new use cases. Their economic potential, however, is hard to deny. Taking just the examples in TFA and this sub-thread, the author created economic value by automating rote aspects of his wife's business and by dropping existing subscriptions to other apps. TFA doesn't mention what he paid for the tokens, but over the lifetime of his apps I'd bet he captures far more value than the tokens cost him.

As for the energy externalities, the ACM article puts some numbers on them. While acknowledging that this is an apples-to-oranges comparison, it points out that the training cost of GPT-3 (the article is from mid-2024) is about 5x the cost of raising a human to adulthood. Even if you 10x that for GPT-5, that is still only the cost of raising 50 humans to adulthood in exchange for a model that encapsulates a huge chunk of the world's knowledge and can then be scaled out to an effectively unlimited number of tasks, each consuming a tiny fraction of the resources a human equivalent would. As such, even accounting for training costs, these models are far more efficient than humans for the tasks they do. (A rough back-of-envelope version of this amortization argument is sketched below.)
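A minimal sketch of that amortization logic, assuming the 5x and 10x ratios quoted above (via the ACM article); the lifetime task count is a purely illustrative placeholder, not a measurement:

```python
# Back-of-envelope amortization of a one-time training cost, measured in
# "humans raised to adulthood" as in the ACM comparison quoted above.
# The 5x / 10x ratios come from the comment; everything else is a placeholder.

human_equivalents_gpt3 = 5              # claimed GPT-3 training cost, in humans raised to adulthood
scale_up_for_gpt5 = 10                  # pessimistic multiplier assumed in the comment
human_equivalents_gpt5 = human_equivalents_gpt3 * scale_up_for_gpt5  # = 50, a one-time cost

tasks_served = 1_000_000_000            # hypothetical lifetime task count, purely illustrative

training_cost_per_task = human_equivalents_gpt5 / tasks_served
print(f"Training cost amortized per task: {training_cost_per_task:.2e} 'human-raisings'")
# -> 5.00e-08: at scale the fixed training cost per task becomes negligible,
#    so the comparison hinges on per-task inference cost vs. per-task human cost.
```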
nikisil80 | 2 days ago:

I appreciate your responses to my comments, including the additional reading material. However, I'm going to push back on both points.

Firstly, saying that because AI water use is on par with other industries we shouldn't scrutinize it is a bit short-sighted. If the future Altman et al. want comes to be, the sheer scale of AI-focused data center deployment will lead to nominal water use orders of magnitude larger than other industries. On a relative scale they can be called 'efficient', but even something efficient, built out to massive scale, can suck up all of our resources. It's not AI's fault that water is a limited resource on Earth, and AI is not the first industry to use a ton of water; but eventually, with all other industries plus AI combined (again, imagining the future the AI kings want), we are definitely doing 300 km/h on the road to worldwide water scarcity. We are at a time when we need to seriously rethink our relationship with water as a society, not a time when we can spawn whole new, extremely consumptive industries (even if, in relative terms, they're on par with what we've been doing, which isn't saying much given the state of the climate) whose upsides are still fairly debatable and not at all proven beyond a doubt.

As for the second link, there's a fairly easy rebuttal, which aligns with the other reply to your link. Sure, LLMs are more energy-efficient at generating text than human beings, but do LLMs actually create new ideas? Write new things? Any text written by an LLM is based on someone else's work. There is a cost to creativity, to giving birth to actual ideas, that LLMs will never incur, which makes them seem more efficient; but in the end they're more efficient at (once again) tasks for which we humans have provided plenty of examples (like writing corporate emails, or fairly cookie-cutter code), and at some point that value creation is limited.

I know you disagree with me, and that's okay; you are in the majority and you can feel good about that. I honestly hope the future you foresee, where LLMs solve our problems and become important building blocks of our society, comes to fruition (rather than the financialized speculation tools they currently are, let's be real). If that happens, I'll be glad I was wrong. I just don't see it happening.
keeda | 2 days ago:

These are important conversations to have, because there is so much hyperbole in both directions that a lot of people end up with strong but misguided opinions. I think it's very helpful to consider the impact of LLMs in context (heheh) of the bigger picture rather than in isolation, because suddenly a lot of things fall into perspective. For instance, all water use by data centers is a fraction of the water used by golf courses! If it really does come down to the wire on conserving water, I think humanity has the option of forgoing a leisure activity for the relatively wealthy in exchange for accelerated productivity for the rest of the world.

And totally, LLMs might not be able to come up with new ideas, but they can super-charge the humans who do have ideas and want to develop them! An idea that would have taken months to explore and develop can now be done in days. And given that the majority of ideas fail, we would be failing that much faster too! In either case, just eyeballing the numbers we currently have, on average the resources a human without AI assistance would consume to conclude an endeavor far outweigh the resources consumed by both that human and an assisting LLM.

I would agree that widespread adoption of AI will likely cause significant problems, but at this point I think they will be social (e.g. significant job displacement, even more wealth inequality) rather than environmental.
ben_w | 9 hours ago:

> I want to push back on this argument. It seems suspect given that none of these tools are creating profit, and so they require funds and resources that essentially come from the combined efforts of much of the economy. In other words, the energy externalities here are enormous and never factored into these comparisons, even though these models could never have gotten off the ground without the massive energy expenditures that were (and continue to be) needed to sustain their funding.

While it is absolutely possible, even plausible, that the economics of these models and providers is the next economic crash in waiting, somewhere between Enron (at worst, if they're knowingly cooking the books) and the Global Financial Crisis (if they're self-deluded rather than actively dishonest), we do have open-weights models that get hosted for money, that people run locally if they're rich enough for the beefy machines, and that are not so far behind the SOTA as to suggest a difference in kind. This strongly suggests that the resource consumption per token of e.g. Claude Code would be reasonably close to its list price if the labs weren't all running a Red Queen's race [0], running as hard as they can just to remain relevant against each other's progress, in an all-pay auction [1] where only the best can hope to cash anything out, and even that may never be enough to cover the spend.

Thing is, automation has basically always worked this way. The question is less "when it can do a task, is it more energy efficient than a human?" and more "which tasks can automation actually do well enough to bother with?" A Raspberry Pi Zero can do basic arithmetic faster than the sum total performance of all 8 billion living humans, even if all the humans had trained hard and reached the level of the current world record holder, for a tenth of the power consumption of just one of those humans' brains, or 2% of their whole body. But that's just arithmetic. Stable Diffusion 1.5 had a similar story: when it came out, the energy cost of making a picture on my laptop was comparable to the calories consumed while typing in the prompt for it… but who cares, SD 1.5 still produced all that Cronenberg anatomy. What matters is when the AI is "good enough" for the tasks against which it is set.

To the extent that Claude Code can replace a human, and given the speed at which it operates… well, my experiments just before Christmas (which are limited, and IMO flawed in a way likely to overstate the current quality of the AI) say the speed on the $20 plan is about 10 sprints per calendar month, while the quality is now at the level of a junior with 1-3 years of experience who is just about to stop being a junior. That means the energy cost per unit of work done is comparable with the energy cost of keeping that developer's computer and monitor switched on long enough to do the same unit of work (see the rough numbers sketched below). The developer's own body adds another 100-120 watts from biology, even if they're a free-range hippie communist who doesn't believe in money, cooked food, lightbulbs, or having a computer or refrigerator at home, and who commutes by foot from a yurt with neither AC nor heating, ditto the office. Where the AI isn't good enough to replace a human (playing Pokemon and managing businesses?), it's essentially infinitely more expensive (in kWh or $) to use the AI.

Still, this does leave a similar argument as with aircraft: really efficient per passenger-kilometre, but they enable so many more passenger-kilometres than before that they still sum to a relevant problem.

[0] https://en.wikipedia.org/wiki/Red_Queen%27s_race

[1] https://en.wikipedia.org/wiki/All-pay_auction
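A rough sketch of how that comparison could be penciled out; the 100-120 W body figure comes from the comment above, while the workstation draw, task durations, and AI-side power figure are placeholder assumptions, not measurements:

```python
# Back-of-envelope energy comparison: a developer doing one unit of work vs.
# an LLM agent doing the same unit of work. Only the 100-120 W human-body
# figure is taken from the comment; every other number is an assumption.

HOURS_PER_UNIT_OF_WORK_HUMAN = 8.0   # assume the task takes a developer one working day
COMPUTER_AND_MONITOR_W = 150.0       # assumed desktop + monitor draw
HUMAN_BODY_W = 110.0                 # middle of the 100-120 W range quoted above

HOURS_PER_UNIT_OF_WORK_AI = 1.0      # assumed agent wall-clock time for the same task
AI_POWER_DRAW_W = 700.0              # assumed per-task share of GPU + datacenter overhead

human_kwh = HOURS_PER_UNIT_OF_WORK_HUMAN * (COMPUTER_AND_MONITOR_W + HUMAN_BODY_W) / 1000
ai_kwh = HOURS_PER_UNIT_OF_WORK_AI * AI_POWER_DRAW_W / 1000

print(f"Human + workstation: {human_kwh:.2f} kWh per unit of work")  # ~2.08 kWh with these assumptions
print(f"AI agent (assumed):  {ai_kwh:.2f} kWh per unit of work")     # ~0.70 kWh with these assumptions
# Under these placeholder numbers the two land in the same order of magnitude,
# which is the shape of the "comparable energy cost" claim in the comment.
```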