neko_ranger 7 days ago
Four H100s in a 2U rack didn't sound impressive, but that is accurate:

> A typical 1U or 2U server can accommodate 2-4 H100 PCIe GPUs, depending on the chassis design.

> In a 42U rack with 20x 2U servers (allowing space for switches and PDUs), you could fit approximately 40-80 H100 PCIe GPUs.
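The rack math quoted above works out as a quick sanity check (the per-server and per-rack counts come from the quote; the 80 GB of HBM per H100 PCIe card is an added assumption, not stated in the comment):

```python
# Sanity check of the quoted rack figures.
gpus_per_server_low, gpus_per_server_high = 2, 4  # H100 PCIe per 2U server (quoted)
servers_per_rack = 20                             # 2U servers per 42U rack (quoted)

low = gpus_per_server_low * servers_per_rack      # 40 GPUs
high = gpus_per_server_high * servers_per_rack    # 80 GPUs
print(f"{low}-{high} GPUs per rack")

# Assumption: 80 GB HBM per H100 PCIe card.
print(f"{high * 80 / 1000:.1f} TB of GPU memory at the high end")  # 6.4 TB
```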
michaelt 7 days ago
Why stop at 80 H100s for a mere 6.4 terabytes of GPU memory? Supermicro will sell you a full rack loaded with servers [1] providing 13.4 TB of GPU memory. And with 132 kW of power draw, you could heat an olympic-sized swimming pool by 1°C every day with that rack alone. That's almost as much power consumption as 10 mid-sized cars cruising at 50 mph.

[1] https://www.supermicro.com/en/products/system/gpu/48u/srs-gb...
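The swimming-pool claim checks out as a back-of-envelope calculation, assuming a nominal 50 m x 25 m x 2 m pool (the depth is an assumption; olympic pools vary) and the specific heat of water:

```python
# Back-of-envelope: can 132 kW heat an olympic pool by ~1 degC per day?

# Assumed pool: 50 m x 25 m x 2 m deep, water density 1000 kg/m^3.
pool_mass_kg = 50 * 25 * 2 * 1000

# Specific heat of water: ~4186 J/(kg*K).
joules_per_degC = pool_mass_kg * 4186

rack_watts = 132_000                      # 132 kW, from the comment
joules_per_day = rack_watts * 86_400      # seconds in a day

print(f"{joules_per_day / joules_per_degC:.2f} degC per day")  # ~1.09
```

So the rack's daily energy output slightly exceeds the ~1.05e10 J needed for a 1°C rise, making "1°C every day" a fair round number.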
| ||||||||||||||
jzymbaluk 7 days ago
And the big hyperscaler cloud providers are building city-block-sized data centers stuffed to the gills with these racks as far as the eye can see.