| ▲ | pdpi 16 hours ago |
| Part of what bothers me about AI energy consumption isn't just how wasteful it might be from an ecological perspective; it's how brutally inefficient it is compared to the biological "state of the art": 2,000 kcal = 8,368 kJ, and 8,368 kJ / 86,400 s ≈ 96.9 W. So the benchmark is achieving human-like intelligence on a ~100 W budget. I'd be very curious to see what could be achieved by AI targeting that power budget. |
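A quick back-of-envelope check of that figure, as a minimal Python sketch. The 2,000 kcal/day intake is the assumption stated in the comment above (whole-body metabolism, not just the brain):

```python
# Back-of-envelope check of the ~100 W figure.
KCAL_PER_DAY = 2_000       # assumed daily food intake, as in the comment above
JOULES_PER_KCAL = 4_184
SECONDS_PER_DAY = 86_400

watts = KCAL_PER_DAY * JOULES_PER_KCAL / SECONDS_PER_DAY
print(f"~{watts:.1f} W")   # ~96.9 W averaged over a day
```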
|
| ▲ | crazygringo 16 hours ago | parent | next [-] |
| Is it though? When I ask an LLM research questions, it often answers in 20 seconds what it would take me an entire afternoon to figure out with traditional research. Similarly, it has written scientific simulation code for me in around a minute that would have taken me two days. Obviously I'm cherry-picking the best examples, but I would guess that overall, the energy my LLM queries have required is vastly less than the biological energy I would have used doing the equivalent work myself. Plus it's not just the energy to run my body -- it's the energy to house me, heat my home, transport my groceries, and so forth. People have way more energy needs than just the kilocalories that fuel them. If you're using AI productively, I assume it's already much more energy-efficient than the energy footprint of a human doing the same amount of work. |
| |
| ▲ | usefulcat 15 hours ago | parent | next [-] |
| > it often answers in 20 seconds what it would take me an entire afternoon to figure out with traditional research. In that case I think it would be only fair to also count the energy required for training the LLM. LLMs are far ahead of humans in terms of the sheer amount of knowledge they can remember, but nowhere close in terms of general intelligence. |
| ▲ | crazygringo 14 hours ago | parent [-] |
| Training energy is amortized across the lifespan of a model. For any given query to the most popular commercial models, your share of the energy used to train it is a small fraction of the energy used for inference (e.g. 10%). |
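A toy Python sketch of the amortization argument. Every number below is an invented assumption chosen only to show the shape of the claim (training energy, lifetime query count, per-query inference energy); none comes from this thread or from any published measurement:

```python
# Toy illustration of training-energy amortization. All figures are made-up assumptions.
TRAINING_ENERGY_KWH = 10_000_000        # assumed energy for one training run
LIFETIME_QUERIES = 1_000_000_000_000    # assumed queries served over the model's lifetime
INFERENCE_KWH_PER_QUERY = 0.0003        # assumed ~0.3 Wh of inference per query

training_kwh_per_query = TRAINING_ENERGY_KWH / LIFETIME_QUERIES
share = training_kwh_per_query / INFERENCE_KWH_PER_QUERY
print(f"training share per query: {training_kwh_per_query * 1000:.3f} Wh "
      f"({share:.0%} of the inference energy)")
```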
| |
| ▲ | discreteevent 15 hours ago | parent | prev [-] |
| For this kind of thinking to work in practice, you would need to kill the people that AI makes redundant. That's quite apart from the fact that right now we are at a choke point where it's much more important to generate less CO2 than it is to write scientific simulation code a little quicker (and most people are using AI for far less necessary stuff, like marketing). |
| ▲ | crazygringo 14 hours ago | parent [-] |
| > For this kind of thinking to work in practice, you would need to kill the people that AI makes redundant. That is certainly not a logical leap I'm making. AI doesn't make anybody redundant, the same way mechanized farming didn't. It just frees them up to do more productive things. Now consider whether LLMs will ultimately speed up the technological advances needed to reduce CO2. It's certainly plausible. |
| ▲ | throwaway-11-1 8 hours ago | parent [-] |
| Honest question - what are artists being freed up to do that’s more important? DoorDash? |
| ▲ | crazygringo 7 hours ago | parent [-] |
| Honest answer - making more art. Think about how much cloud computing and open source changed things so you could launch a startup with 3 engineers instead of 20. What happened? An explosion of startups, since there were so many more engineers to go around. The engineers weren't delivering pizzas instead. The same thing is happening with anything that needs more art -- the potential for video games here is extraordinary. As the tools mature, a trained artist leveraging AI is way more effective, handling 10x the output. Now you get 10x more video games, or 10x more complex/larger worlds, or whatever it is that the market ends up wanting. |
|
| ▲ | exitb 16 hours ago | parent | prev | next [-] |
| How so? A human needs the entire civilisation to be productive at that level. If you take just the entire US electricity consumption and divide it by its population, you'll get a result that's an order of magnitude higher. And that's just electricity. And that's just domestic consumption, even though US Americans consume tons of foreign-made goods. |
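A rough sketch of that division. Both inputs are ballpark assumptions of mine (roughly 4,000 TWh/year of US electricity use and about 335 million people), not figures from the comment:

```python
# Rough per-capita electricity check. Both inputs are ballpark assumptions.
US_ELECTRICITY_TWH_PER_YEAR = 4_000
US_POPULATION = 335_000_000
HOURS_PER_YEAR = 8_760

watts_per_capita = US_ELECTRICITY_TWH_PER_YEAR * 1e12 / HOURS_PER_YEAR / US_POPULATION
print(f"~{watts_per_capita:.0f} W per person")  # ~1,360 W, vs ~97 W of food energy
```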
|
| ▲ | rixed 10 hours ago | parent | prev | next [-] |
| Ah! And don't get me started on how specific its energy source must be! Pure electricity, no less! Whereas a human brain comes attached to an engine that can power it for days on a mere ham sandwich! |
|
| ▲ | Magnets 8 hours ago | parent | prev | next [-] |
| You didn't consider the 18+ years we spend with almost no productivity, or the extra resources required to sustain life. |
|
| ▲ | roflmaostc 16 hours ago | parent | prev | next [-] |
| Try to calculate 12312312.123213 * 123123.3123123. A computer uses orders of magnitude less energy than a human for that. It's all about the task; humans are specialized too. EDIT: maybe add a logarithm or other non-linear functions to make the gap even bigger. |
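For a sense of scale, a toy comparison in Python. The energy figures are loose assumptions (a ~10 W CPU core finishing the multiply in about a nanosecond, versus a person at ~100 W taking a few minutes by hand), not measurements:

```python
# Toy energy comparison for one floating-point multiply. All numbers are assumptions.
cpu_joules = 10 * 1e-9           # ~10 W core busy for ~1 ns
human_joules = 100 * 5 * 60      # ~100 W person working for ~5 minutes by hand

print(12312312.123213 * 123123.3123123)
print(f"energy ratio: ~{human_joules / cpu_joules:.0e}x in the computer's favor")
```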
| |
|
| ▲ | redox99 9 hours ago | parent | prev | next [-] |
| How much energy did evolution "spend" to get us here? I agree human brains are crazy efficient though. |
|
| ▲ | saagarjha 16 hours ago | parent | prev | next [-] |
| That’s about the power a laptop or two draws at full tilt. |
|
| ▲ | FergusArgyll 16 hours ago | parent | prev | next [-] |
| You can't compare a training run, which produces a file that can be run forever afterwards, to a single human day. |
| |
| ▲ | beepbooptheory 16 hours ago | parent [-] |
| Inference itself is also very costly! But either way, how many human lives are spent making that file? |
| ▲ | dale_glass 16 hours ago | parent [-] |
| Not really. I can generate images or get LLM answers in under 15 seconds on mundane hardware. The image generator draws many times faster than any normal person, and the LLM, even on my consumer hardware, still produces output faster than I can type (and I'm quite good at that), let alone think about what to type. |
| ▲ | butlike 10 hours ago | parent | next [-] |
| An LLM gives AN answer. Ask for even a few more than that and it gets confused, but instead of acting in a human-like way, it confidently proceeds with incorrect answers. You never quite know when the context got poisoned, but once it is, reliability drops to zero. There are many things to say about this. Free is worthless. Speed is not necessarily a good thing. The image generation is drivel. But... the main nail in the coffin is accountability. I can't trust my work if I can't trust the output of the machine. (And as a bonus, the machine can't build a house. It's single-purpose.) |
| ▲ | Dylan16807 9 hours ago | parent [-] |
| Okay, but this has vanishingly little to do with the comment chain you replied to, which was about energy efficiency. |
| |
| ▲ | beepbooptheory 15 hours ago | parent | prev [-] |
| Is "faster" really what we are talking about right now? It could be a lot faster to take a helicopter to work every day too, versus riding a bike. Also, why are people moving mountains to build huge, power-obliterating datacenters if actually "it's fine, it's not that much"? |
| ▲ | dale_glass 15 hours ago | parent | next [-] |
| Speed correlates strongly with power efficiency. I believe my hardware maxes out somewhere around 150 W, and 15 seconds of that isn't much at all. > Also, why are people moving mountains to build huge, power-obliterating datacenters if actually "it's fine, it's not that much"? I presume that's mostly training, not inference. But in general, anything that serves millions of requests in a small footprint is going to look pretty big. |
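The arithmetic behind "15 seconds of that isn't much", as a sketch using the 150 W figure quoted here and the ~97 W human figure from the top comment (a back-of-envelope comparison, not a measurement):

```python
# Energy for one local generation at the figures quoted in the thread.
DEVICE_WATTS = 150   # quoted peak draw of the local hardware
SECONDS = 15         # quoted time per generation

joules = DEVICE_WATTS * SECONDS
print(f"{joules} J ~= {joules / 3600:.2f} Wh")                        # 2250 J ~= 0.62 Wh
print(f"equivalent to ~{joules / 97:.0f} s of a ~97 W human metabolism")  # ~23 s
```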
| ▲ | discreteevent 15 hours ago | parent | prev | next [-] |
| > It could be a lot faster to take a helicopter to work every day too, versus riding a bike. Great analogy. |
| ▲ | Dylan16807 9 hours ago | parent [-] |
| It's not a good analogy at all, because of what they said about mundane hardware. They're specifically not talking about any kind of ridiculous-wattage situation; they're talking about single GPUs that need fewer watts than a human in an office to produce text faster than a human, or that need 2-10x the watts to make video a thousand times faster. |
| ▲ | dale_glass 5 hours ago | parent [-] |
| It's a Framework Desktop motherboard. I believe the CPU on that maxes out somewhere around 150 W. |
|
| |
| ▲ | FergusArgyll 15 hours ago | parent | prev [-] |
| There are a billion users. Why do we make massive cities and factories and fields if humans only need 2000 calories a day? |
|
| ▲ | windexh8er 15 hours ago | parent | prev [-] |
| Beyond the wastefulness, the linked article can't even remotely be taken seriously. > An AI cloud can generate revenue of $10-12 billion dollars per gigawatt, annually. What? I let ChatGPT swag an answer on the revenue forecast and it cited $2-6B rev per GW year. And then we get this gem... > Wärtsilä, historically a ship engine manufacturer, realized the same engines that power cruise ships can power large AI clusters. It has already signed 800MW of US datacenter contracts. So now we're going to be spewing ~486 g CO₂e per kWh using something that wasn't designed to run 24/7/365 to handle these workloads? Datacenters choosing these forms of power should have to win a local vote and be held to annual measurements of NOx, CO, VOC and PM. This article just showcases all the horrible band-aids being applied to procure energy in any way possible, with little regard for health or environmental impact. |
| |
| ▲ | Aurornis 14 hours ago | parent [-] |
| > What? I let ChatGPT swag an answer on the revenue forecast and it cited $2-6B rev per GW year. This article is coming from one of the premier groups doing financial and technical analysis on the semiconductor industry and AI companies. I trust their numbers a hundred times more than a ChatGPT guess. |
| ▲ | windexh8er 14 hours ago | parent [-] |
| Are you sure they don't have a vested interest? At least ChatGPT gave me sources. It doesn't matter who they are if there's nothing backing it up. The entire article is predicated on the assumption that this is profitable long term. Again: > An AI cloud can generate revenue of $10-12 billion dollars per gigawatt, annually. Yet this figure isn't justified at all, nor is it stated what "AI cloud" actually means or how they arrived at those numbers. |
|
|