| ▲ | nine_k 5 hours ago |
| > If the "half a million tons" figure were accurate, a single 1 GW data center would consume 1.7% of the world's annual copper supply. If we built 30 GW of capacity—a reasonable projection for the AI build-out—that sector alone would theoretically absorb almost half of all the copper mined on Earth. Quickly doing such "back of an envelope" calculations, and calling out things that seem outlandish, could be a useful function of an AI assistant. |
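That arithmetic can be checked in a few lines of Python. The ~29 Mt/yr world copper supply below is an assumed figure (roughly what the quoted 1.7% implies), not a number stated in the comment:

```python
# Sanity check of the "half a million tons of copper per 1 GW" claim.
# ASSUMPTION: annual world copper supply of ~29 million tonnes (roughly
# the figure implied by the comment's 1.7%; real estimates vary).

claimed_copper_per_gw_t = 500_000     # "half a million tons" per 1 GW data center
world_copper_supply_t = 29_000_000    # assumed annual world supply, tonnes

share_one_gw = claimed_copper_per_gw_t / world_copper_supply_t
share_30_gw = 30 * share_one_gw       # a 30 GW AI build-out

print(f"1 GW data center: {share_one_gw:.1%} of annual supply")
print(f"30 GW build-out: {share_30_gw:.0%} of annual supply")
```

Which reproduces the comment's numbers: about 1.7% for a single gigawatt, and roughly half of annual supply at 30 GW.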
|
| ▲ | 9dev 5 hours ago | parent | next [-] |
Using your brain is so vastly more energy efficient that we might need only half of that 30 GW capacity if fewer people had these leftpad-style knee-jerk reactions.
| |
| ▲ | jasomill 22 minutes ago | parent | next [-] | | While I don't know how much more or less efficient it is, WolframAlpha works well for these sorts of approximations, and it shows its work more clearly than the AI chatbots I've used. | |
| ▲ | blueg3 5 hours ago | parent | prev | next [-] | | A Gemini query uses about a kilojoule. The brain runs at 20 W (though the whole human costs 100 W). So, the human uses less energy if you can get it done in under 50 seconds. | | |
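The 50-second break-even falls straight out of those two figures (both rough estimates to begin with):

```python
# Break-even thinking time for a ~1 kJ chatbot query, using the figures
# from the comment above.

query_energy_j = 1_000   # ~1 kJ per Gemini query (claimed)
brain_power_w = 20       # brain alone
human_power_w = 100      # whole human

brain_breakeven_s = query_energy_j / brain_power_w   # brain-only break-even
human_breakeven_s = query_energy_j / human_power_w   # whole-human break-even

print(brain_breakeven_s, human_breakeven_s)
```

Counting only the brain gives 50 s; counting the whole human shrinks the break-even to 10 s.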
| ▲ | 9dev 5 hours ago | parent | next [-] | | Where does that number come from? Does it factor in the energy required to build the servers used to train the model? Does it factor in… the training? | | |
| ▲ | pixl97 4 hours ago | parent | next [-] | | There is no end to this energy comparison... For example does it factor in the 18-24 years needed to train a human and the energy used for that? | |
| ▲ | lostlogin 5 hours ago | parent | prev [-] | | Hopefully we aren’t doing too much AI training to work out 200 * 1000. If a computer is involved at all it’s disappointing, if AI is used, more so. | | |
| ▲ | 9dev 5 hours ago | parent [-] | | It doesn't matter what the model is actually doing at the end of the day, when training and hosting it involves massive amounts of energy. |
|
| |
| ▲ | lostlogin 5 hours ago | parent | prev [-] | | If humans aren’t more efficient, the energy is still used either way, as they remain alive. Maybe the AI will notice this? |
| |
| ▲ | wongarsu 5 hours ago | parent | prev [-] | | Each person uses about 100 W (2000 kcal / 24 h ≈ 96 W). Running all of humanity takes about 775 GW. Granted, using or not using your brain makes a negligible energy difference, so if you aren't using it you really should, for energy efficiency's sake. But I don't think the claim that our brains are more energy efficient is obviously true on its own. The issue is more about induced demand from having all this external "thinking" capacity at your fingertips. | | |
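Those two numbers check out; a quick sketch, assuming a population of ~8 billion:

```python
# Converting the comment's figures: 2000 kcal/day per person, ~8 billion people.
# ASSUMPTION: the 8e9 population count is mine, not stated in the comment.

KCAL_TO_J = 4184
SECONDS_PER_DAY = 24 * 3600

person_w = 2000 * KCAL_TO_J / SECONDS_PER_DAY   # per-person metabolic power
humanity_gw = person_w * 8e9 / 1e9              # all of humanity, in GW

print(round(person_w, 1), round(humanity_gw))   # ≈ 96.9 W and ≈ 775 GW
```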
| ▲ | program_whiz 5 hours ago | parent | next [-] | | Is there an AI system with functionality equal to a human brain's that operates on less than 100 W? It's currently the most efficient model we have. You compare all of humanity's energy expenditure, but to make the comparison you need to consider the cost of replicating all that compute with AI (assuming we had an AGI at human level in all regards, or a set of AIs that, operated together, could replace all human intelligence). | | |
| ▲ | pixl97 4 hours ago | parent | next [-] | | >all human intelligence So, this is rather complex, because you can turn AI energy usage down to nearly zero when not in use. Humans have this problem of needing to consume a large amount of resources for 18-24 years with very little useful output during that time, and have to be kept running 24/7, otherwise you lose your investment. And even then there is a lot of risk they are going to be gibbering idiots and represent a net loss on your resource expenditure. For this I have a modern Modest Proposal: that we use young children as feedstock for biofuel generation before they become a resource sink. Not only do you save the child from a life of being a wage slave, you can now power your AI data center. I propose we call this the Matrix Efficiency Saving System (MESS). | |
| ▲ | Aerroon 3 hours ago | parent | prev | next [-] | | >Is there an AI system with functionality at or equal to a human brain that operates on less than 100W? Obviously not equal to a human brain, but my GPU takes about 150W and can draw an image in a minute that would take me forever to replicate. | |
| ▲ | tlb 5 hours ago | parent | prev | next [-] | | No one will ever agree on when AI systems have equivalent functionality to a human brain. But lots of jobs consist of things a computer can now do for less than 100W. Also, while a body itself uses only 100W, a normal urban lifestyle uses a few thousand watts for heat, light, cooking, and transportation. | | |
| ▲ | 9dev 5 hours ago | parent [-] | | > Also, while a body itself uses only 100W, a normal urban lifestyle uses a few thousand watts for heat, light, cooking, and transportation. Add to that the tier-n dependencies this urban lifestyle has—massive supply chains sprawling across the planet, for example involving thousands upon thousands of people and goods involved in making your morning coffee happen. | | |
| ▲ | wongarsu 4 hours ago | parent [-] | | Wikipedia quoted global primary energy production at 19.6 TW, or about 2400 W per person. Which is obviously not even close to equally distributed. Per-country it gets complicated quickly, but naively taking the total from [1] brings the US to 9 kW per person. And that's ignoring sources like food from agriculture, including the food we feed our food. To be fair, AI servers also use a lot more energy than their raw power demand if we use the same metrics. But after accounting for everything, an American and an 8xH100 server might end up in about the same ballpark. Which is not meant as an argument for replacing Americans with AI servers, but it puts AI power demand into context. [1] https://www.eia.gov/energyexplained/us-energy-facts/
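The naive per-capita arithmetic behind those figures, as a sketch; the population counts and the ~94-quad US primary-energy total are my assumptions, not numbers from the comment:

```python
# Per-capita power from the quoted totals.
# ASSUMPTIONS: ~8.1e9 people globally, ~335e6 in the US, and ~94 quads
# of annual US primary energy (an EIA-style ballpark, not from the post).

QUAD_TO_J = 1.055e18
SECONDS_PER_YEAR = 365 * 24 * 3600

world_primary_tw = 19.6
us_primary_quads = 94

world_per_person_w = world_primary_tw * 1e12 / 8.1e9
us_per_person_kw = us_primary_quads * QUAD_TO_J / SECONDS_PER_YEAR / 335e6 / 1000

print(round(world_per_person_w))     # ≈ 2420 W globally
print(round(us_per_person_kw, 1))    # ≈ 9.4 kW in the US
```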
|
| |
| ▲ | wongarsu 5 hours ago | parent | prev [-] | | Obviously we don't have AGI, so we can't compare many tasks. But on tasks where AI does perform at comparable levels (certain subsets of writing, greenfield coding and art) it performs fairly well. It uses more power but is also much faster, and that about cancels out. There are plenty of studies that try to put numbers on the exact tradeoff, usually focused more on CO2. Plenty find AI better by some absurd degree (800 times more efficient at 3D modelling, 130 to 1500 times more efficient at writing, or 300 to 3000 times more efficient at illustrating [1]). The one I'd trust the most is [2], where GPT-4 was 5-19 times less CO2-efficient than humans at solving coding challenges. 1: https://www.nature.com/articles/s41598-024-54271-x?fromPaywa... 2: https://www.nature.com/articles/s41598-025-24658-5
| |
| ▲ | Majromax 5 hours ago | parent | prev [-] | | I did some math for this particular case by asking Google’s Gemini Pro 3 (via AI Studio) to evaluate the press release. Nvidia has since edited the release to remove the “tons of copper” claim, but it evaluated the other numbers at a reported API cost of about 3.8 cents. If the stated pricing just recovers energy cost, that implies 1500 kJ of energy as a maximum (less if other costs are recovered in the pricing). A human thinking for 10 minutes would use about 6 kJ of direct energy. I agree with your point about induced demand. The “win” wouldn’t be looking at a single press release with already-suspect numbers, but rather looking at essentially all press releases of note, a task not generally valuable enough to devote people to. That being said, we normally consider it progress when we can use mechanical or electrical energy to replace or augment human work.
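The energy bound can be reverse-engineered from the price like so; the $0.09/kWh electricity rate below is an assumed plug-in number, not something stated in the comment:

```python
# Upper bound on query energy if the API price only recovered electricity.
# ASSUMPTION: electricity at ~$0.09/kWh (my plug-in figure, not from the post).

api_cost_usd = 0.038                 # reported ~3.8 cent API cost
electricity_usd_per_kwh = 0.09       # assumed electricity price

max_energy_kwh = api_cost_usd / electricity_usd_per_kwh
max_energy_kj = max_energy_kwh * 3600   # 1 kWh = 3600 kJ

print(round(max_energy_kj))   # ≈ 1520 kJ, consistent with the ~1500 kJ ceiling
```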
|
|
|
| ▲ | Gravityloss 5 hours ago | parent | prev | next [-] |
| I think this is exactly the thing that should be done by a person without AI, to check what AI is writing. |
| |
| ▲ | brookst 5 hours ago | parent | next [-] | | That can be true while also seeing value in using an AI to sanity check human-generated claims. | |
| ▲ | franktankbank 5 hours ago | parent | prev [-] | | Nah just spin up an agentic ai proofreading agent on the cloud. |
|
|
| ▲ | CamperBob2 3 hours ago | parent | prev | next [-] |
| Nobody on HN is a bigger AI stan than I am -- well, maybe that SimonW guy, I guess -- but the truth is that problems involving unit conversions are among the riskiest things you can ask an LLM to handle for you. It's not hard to imagine why, as the embedding vectors for terms like pounds/kilograms and feet/yards/meters are not going to be far from each other. Extreme caution is called for. |
| |
| ▲ | zahlman 3 hours ago | parent [-] | | That sounds like the sort of thing I'd expect them to be good at. What goes wrong? | | |
| ▲ | CamperBob2 3 hours ago | parent [-] | | I edited the post with a speculation, but it's just a guess, really. In the training data, different units are going to share near-identical grammatical roles and positions in sentences. Unless some care is taken to force the embedding vectors for units like "pounds" and "kilograms" to point in different directions, their tokens may end up being sampled more or less interchangeably. Gas-law calculations were where I first encountered this bit of scariness. It was quite a while ago, and I imagine the behavior has been RLHF'ed or otherwise tweaked to be less of a problem by now. Still, worth watching out for. | | |
| ▲ | zahlman 3 hours ago | parent [-] | | > In the training data, different units are going to share near-identical grammatical roles and positions in sentences. Yes, but I would also expect the training data to include tons of examples of students doing unit-conversion homework, resources explaining the concept, etc. (So I would expect the embedding space to naturally include dimensions that represent some kind of metric-system-ness, because of data talking about the metric system.) And I understand the LLMs can somehow do arithmetic reasonably well (though it matters for some reason how big the numbers are, so presumably the internal logic is rather different from textbook algorithms), even without tool use. |
|
|
|
|
| ▲ | kergonath 5 hours ago | parent | prev [-] |
| It’s one of the useful functions of an engineer. |
| |
| ▲ | potato3732842 5 hours ago | parent [-] | | It's almost always the engineers, analysts, MBA spreadsheet pushers, and other people removed from the physical consequences putting out these mistakes, because it's way easier not to notice a misplaced decimal or an incorrect value when you deal in pure numbers and know what they "should" be than when you're the person actually figuring out how to make it happen: the difference between needing 26666666.667 and 266666666.667 <units> of <widget> is pretty meaningful. Engineers don't output these mistakes as often as analysts or whatever because they work in organizations that invest more in catching them, not because they make them all that much less often. Whether talking weight or bulk, a decimal place is approximately the difference between needing a wheelbarrow, a truck, a semi truck, a freight train and a ship. | | |
| ▲ | kergonath 3 hours ago | parent [-] | | Around here, asking "does this number make sense?" when coming across a figure is second nature, reinforced since early in engineering school. The couple of engineers from the US that I know behave similarly, which makes sense because when your job is to solve practical problems and design stuff, precision matters. > difference between needing 26666666.667 and 266666666.667 <units> of <widget> is pretty meaningful To be fair, that’s why we’d use 2.6666666667e7 and 2.66666666667e8, which makes it easier to think about orders of magnitude. Processes, tools and methods must be adapted to reduce the risk of making a mistake. | | |
|
|