| ▲ | ddtaylor 4 days ago |
| For someone who doesn't know what a gigawatt worth of Nvidia systems is, how many high-end H100s or whatever does this get you? My estimates, along with some poor-grade GPT research, lead me to think it could be nearly 10 million? That does seem insane. |
|
| ▲ | kingstnap 4 days ago | parent | next [-] |
| It's a ridiculous amount claimed, for sure. If it's 2 kW per GPU, it's around 5 million, and 1 to 2 kW is definitely the right ballpark at a system level. The NVL72 is 72 chips at 120 kW total for the rack. If you throw in ~25 kW for cooling, it's pretty much exactly 2 kW each. |
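(A quick sanity check of the arithmetic above, as a Python sketch; the ~2 kW all-in per-GPU figure comes from the rack-level numbers quoted in this comment, not from an Nvidia spec:)

    # how many GPUs 10 GW buys at the rack-level power quoted above
    total_power_w = 10e9                             # the headline 10 GW
    rack_it_w = 120e3                                # NVL72: 72 GPUs, ~120 kW of IT load
    rack_cooling_w = 25e3                            # assumed cooling overhead per rack
    per_gpu_w = (rack_it_w + rack_cooling_w) / 72    # ~2.0 kW per GPU, all-in
    print(total_power_w / per_gpu_w)                 # ~5 million GPUs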
| |
| ▲ | fuzzfactor 4 days ago | parent [-] | | So each 2 kW component is like a top-shelf space heater, which the smart money never did want to run unless it was quite cold outside. | | |
|
|
| ▲ | thrtythreeforty 4 days ago | parent | prev | next [-] |
| Safely in "millions of devices." The exact number depends on the assumptions you make regarding all the supporting stuff, because the accelerators typically consume only a fraction of the total power requirement. Even so, millions. |
| |
| ▲ | cj 4 days ago | parent [-] | | "GPUs per user" would be an interesting metric. (Quick, inaccurate googling) says there will be "well over 1 million GPUs" by end of the year. With ~800 million users, that's 1 NVIDIA GPU per 800 people. If you estimate people actively use ChatGPT 5% of the day (1.2 hours a day), you could say there's 1 GPU per 40 people in active use, assuming consistent and even usage patterns. That back-of-the-envelope math isn't accurate, but it's interesting in the context of understanding just how much compute ChatGPT requires to operate. Edit: I asked ChatGPT how many GPUs per user, and it spit out a bunch of calculations that estimate 1 GPU per ~3 concurrent users. Would love to see a more thorough/accurate breakdown. | | |
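(The same back-of-the-envelope, written out; the 1M-GPU, 800M-user, and 5%-active figures are the rough numbers quoted in the comment, not measured values:)

    gpus = 1_000_000                       # "well over 1 million GPUs" by end of year
    users = 800_000_000                    # ~800 million users
    active_fraction = 0.05                 # assume each user is active ~5% of the day (~1.2 h)
    print(users / gpus)                    # ~800 people per GPU overall
    print(users * active_fraction / gpus)  # ~40 actively-using people per GPU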
| ▲ | coder543 4 days ago | parent | next [-] | | A lot of GPUs are allocated for training and research, so dividing the total number by the number of users isn’t particularly useful. Doubly so if you’re trying to account for concurrency. | |
| ▲ | NooneAtAll3 4 days ago | parent | prev [-] | | I'm kinda scared of "1.2 hours a day of ai use"... | | |
| ▲ | Rudybega 4 days ago | parent [-] | | Sorry, those figures are skewed by Timelord Georg, who has been using AI for 100 million hours a day, is an outlier, and should have been removed. | | |
| ▲ | fuzzfactor 4 days ago | parent [-] | | Roger, but I still think with that much energy at its disposal, if AI performs as desired it will work its way up to using each person more than 1.2 hours per day, without them even knowing about it :\ | | |
| ▲ | Nevermark 4 days ago | parent [-] | | When GPUs share people concurrently, they collectively get much more than 24 hours of person per day. | | |
| ▲ | fuzzfactor 4 days ago | parent [-] | | You're right! With that kind of singularity the man-month will no longer be mythical ;) | | |
|
|
|
|
|
|
|
| ▲ | sandworm101 4 days ago | parent | prev | next [-] |
| At this scale, I would suggest that these numbers are for the entire data center rather than a sum of the processor demands. Also, the "infrastructure partnership" language suggests more than just compute. So I would add cooling into the equation, which could be as much as half the power load, or more, depending on where they intend to locate these datacenters. |
|
| ▲ | skhameneh 4 days ago | parent | prev | next [-] |
| Before reading your comment I did some napkin math using 600 W per GPU: 10,000,000,000 / 600 = 16,666,666.66... With varying consumption/TDP it could be significantly more or significantly less, but at least it gives a starting figure. This doesn't account for overhead like energy losses, burst/nominal/sustained behavior, system overhead, and heat removal. |
| |
|
| ▲ | alphabetag675 4 days ago | parent | prev | next [-] |
| Account for around 3 MW for every 1,000 GPUs. So 10 GW is around 3,333 of those 3 MW blocks, i.e. 3,333 * 1,000 GPUs, so roughly 3.33 million GPUs. |
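(Or spelled out in Python; the 3 MW per 1,000 GPUs rule of thumb is the parent's assumption:)

    total_power_w = 10e9                              # 10 GW
    per_1000_gpus_w = 3e6                             # ~3 MW per 1,000 GPUs, i.e. ~3 kW per GPU all-in
    print(total_power_w / per_1000_gpus_w * 1000)     # ~3.33 million GPUs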
|
| ▲ | ProofHouse 4 days ago | parent | prev | next [-] |
| How much cable (and what kind) would it take to connect them all? That number would be 100x the number of GPUs. I would have thought they just clip onto metal racks with no cables, but then I saw the xAI data center that had blue wire cables everywhere. |
| |
| ▲ | hbarka 4 days ago | parent [-] | | It was announced last week that Nvidia acqui-hired a company that can connect more than 100,000 GPUs together as a cluster that can effectively serve as a single integrated system. | | |
|
|
| ▲ | iamgopal 4 days ago | parent | prev | next [-] |
| And how much is that as a percentage of Bitcoin network capacity? |
| |
| ▲ | mrb 4 days ago | parent | next [-] | | Bitcoin mining consumes about 25 GW: https://ccaf.io/cbnsi/cbeci so this single deal amounts to about 40% of that. To be clear, I am comparing power consumption only. In terms of mining power, all these GPUs could only mine a negligible fraction of what all the specialized Bitcoin ASICs mine. Edit: some math I did out of sheer curiosity: a modern top-of-the-line GPU would mine BTC at about 10 Ghash/s (I don't think anyone has tried, but I wrote GPU mining software back in the day, and that is my estimate). Nvidia is on track to sell 50 million GPUs in 2025. If they were all mining, their combined compute power would be 500 Phash/s, which is 0.05% of Bitcoin's global mining capacity. |
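(The parent's mining math, spelled out; the ~1 Zhash/s network figure used here is an order-of-magnitude assumption for the current Bitcoin hashrate, not a measured value:)

    gpu_hashrate = 10e9                              # ~10 Ghash/s per top-end GPU (parent's estimate)
    gpus_sold = 50e6                                 # ~50 million GPUs sold in 2025
    network_hashrate = 1e21                          # Bitcoin network, roughly 1 Zhash/s (assumed)
    fleet_hashrate = gpu_hashrate * gpus_sold        # 5e17 hash/s = 500 Phash/s
    print(fleet_hashrate / network_hashrate * 100)   # ~0.05% of global mining capacity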
| ▲ | cedws 4 days ago | parent | prev [-] | | I'm also wondering what kind of threat this could be to PoW blockchains. | | |
| ▲ | typpilol 4 days ago | parent [-] | | Literally none at all, because of ASICs. | | |
| ▲ | fuzzfactor 4 days ago | parent | next [-] | | What happens if AI doesn't pay off before the GPUs wear out or need replacement? Does a DC then replace them all with ASICs instead? Or they could just do that any time they feel like it. |
| ▲ | cedws 3 days ago | parent | prev [-] | | Some chains are designed to be ASIC resistant. |
|
|
|
|
| ▲ | az226 3 days ago | parent | prev | next [-] |
| Vera Rubin will be about 2.5 kW and Feynman will be about 4 kW. All-in, you're looking at a higher footprint, maybe 4-5 kW per GPU blended. So about 2 million GPUs. |
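(The same division with these forward-looking per-GPU numbers, which are estimates for unreleased hardware:)

    total_power_w = 10e9
    blended_per_gpu_w = 4.5e3                    # assumed 4-5 kW all-in per GPU for future generations
    print(total_power_w / blended_per_gpu_w)     # ~2.2 million GPUs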
|
| ▲ | 4 days ago | parent | prev | next [-] |
| [deleted] |
|
| ▲ | 4 days ago | parent | prev | next [-] |
| [deleted] |
|
| ▲ | awertjlkjl 4 days ago | parent | prev [-] |
| You could think of it as "as much power as is used by NYC and Chicago combined". Which is fucking insanely wasteful. |
| |
| ▲ | onlyrealcuzzo 4 days ago | parent | next [-] | | I dunno. Google is pretty useful. It uses >15 TWh per year. Theoretically, AI could be more useful than that. Theoretically, in the future, it could be just as useful (or much more) with substantially less power usage. It could be a short-term crunch to pull forward (slightly) AI advancements. Additionally, I'm extremely skeptical they'll actually turn on this many chips using that much energy globally in a reasonable time frame. Saying that you're going to make that kind of investment is one thing. Actually getting the power for it is easier said than done. VC "valuations" are already a joke. They're more like minimum valuations. If OpenAI is worth anywhere near its current "valuations", Nvidia would be criminally negligent NOT to invest at a 90% discount (the marginal profit on their chips). | |
| ▲ | dns_snek 4 days ago | parent | next [-] | | According to Google's latest environmental report[1], that number was 30 TWh per year in 2024, but as far as I can tell that's the total consumption of their datacenters, which would include everything from Google Search, to Gmail, YouTube, and every Google Cloud customer. Is it broken down by product somewhere? 30 TWh per year is equivalent to an average power consumption of 3.4 GW for everything Google does. This partnership is 3x more energy intensive. Ultimately, the difference in `real value/MWh` between these two must be many orders of magnitude. [1] https://sustainability.google/reports/google-2025-environmen... | |
| ▲ | onlyrealcuzzo 3 days ago | parent [-] | | Data centers typically use 60% (or less) of their max rating on average. You over-provision so that you (almost) always have enough compute to meet your customers' needs (even at planet scale, your demand is bursty), and you're always doing maintenance on some section, spinning up new hardware and turning down old hardware. So, apples to apples, this would likely not even be 2x at 30 TWh for Google. |
| |
| ▲ | tmiku 4 days ago | parent | prev | next [-] | | For other readers: "15 TWh per year" is equivalent to 1.71 GW, 17.1% of the "10 GW" number used to describe the deal. | |
| ▲ | mNovak 4 days ago | parent [-] | | This is ignoring the utilization factor, though. Both Google and OpenAI have to overprovision servers for the worst-case number of simultaneous users. So 1.71 GW average doesn't tell us the maximum instantaneous GW capacity of Google -- if we pull a 4x out of the hat (i.e. peak usage is 4x above average), it becomes ~7 GW of available compute. More than a "Google" of new compute is of course still a lot, but it's not many Googles' worth. |
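(The unit conversion behind these figures; the 4x peak-to-average factor is, as the parent says, pulled out of a hat and purely illustrative:)

    HOURS_PER_YEAR = 8760
    avg_gw_15twh = 15e12 / HOURS_PER_YEAR / 1e9       # ~1.7 GW average draw
    avg_gw_30twh = 30e12 / HOURS_PER_YEAR / 1e9       # ~3.4 GW average draw
    peak_factor = 4                                   # hypothetical peak-to-average ratio
    print(avg_gw_15twh, avg_gw_15twh * peak_factor)   # ~1.7 GW avg, ~6.8 GW provisioned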
| |
| ▲ | Capricorn2481 4 days ago | parent | prev | next [-] | | Does Google not include AI? | |
| ▲ | 4 days ago | parent | prev [-] | | [deleted] |
| |
| ▲ | 4 days ago | parent | prev | next [-] | | [deleted] | |
| ▲ | jazzyjackson 4 days ago | parent | prev | next [-] | | I mean, if 10 GW of GPUs gets us AGI and we cure cancer, then that's cool, but I do get the feeling we're just getting uncannier chatbots and fully automated TikTok influencers. | |
| ▲ | yard2010 4 days ago | parent | next [-] | | Current LLMs are just like farms. Instead of tomatoes by the pound, you buy tokens by the pound. So it depends on the customers. |
| ▲ | junon 4 days ago | parent | prev | next [-] | | This is also my take. I think a lot of people miss the trees for the forest (intentionally backward). AI that could find a cure for cancer isn't the driving economic factor in LLM expansion, I don't think. I doubt cancer researchers are holding their breath on this. | |
| ▲ | rebolek 4 days ago | parent | prev [-] | | And when it’s built, Sam Altman will say: We are so close, if we get 10TW, AGI will be here next year! |
| |
| ▲ | diego_sandoval 4 days ago | parent | prev [-] | | Do you think the existence of NYC and Chicago is insanely wasteful? |
|