trollbridge 16 hours ago

A typical data centre costs around $2,500 per year per kW of load (including overhead, HVAC, and so on).

If it costs $800,000 to replace the whole rack, that would pay off in a year if it eliminates 320 kW of consumption. Back when we ran servers, we wouldn't assume 100% utilisation, but AI workloads do run at that; normal server loads would be around 10 kW per rack, and AI is closer to 100 kW. So yeah, it's not hard to imagine power savings equivalent to 3.2 racks being worth it.
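A quick sketch of that payback arithmetic (the dollar and kW figures are just the assumptions stated above, not measured numbers):

    # Payback arithmetic using the figures from the comment above (all inputs are assumptions).
    cost_per_kw_year = 2500       # $/kW/year, all-in DC operating cost
    upgrade_cost = 800_000        # $ to replace the whole rack
    ai_rack_power_kw = 100        # kW per AI rack at ~100% utilisation

    # kW of load you'd need to eliminate for the upgrade to pay off in one year
    break_even_kw = upgrade_cost / cost_per_kw_year        # 320 kW
    racks_equivalent = break_even_kw / ai_rack_power_kw    # 3.2 racks

    print(break_even_kw, racks_equivalent)                 # 320.0 3.2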

Octoth0rpe 15 hours ago | parent [-]

Thanks for the numbers! Isn't it more likely that the amount of power/heat generated per rack will stay constant over each upgrade cycle, and the upgrade simply unlocks a higher amount of service revenue per rack?

PunchyHamster 11 hours ago | parent | next [-]

Not in the last few years. CPUs went from ~200W TDP to 500W.

And they went from zero to multiple GPUs per server. Though we might hit the "chips can't get bigger and cooling can't get much better" point there.

Power usage would stay roughly constant if it were, say, a rack of servers full of bulk storage (hard drives generally keep power draw similar while capacity grows).

But on the CPU/GPU side, it's just bigger chips, more chiplets, more power.

I'd imagine any flattening would be purely because "we have the DC now, rebuilding the cooling for the next generation doesn't make sense, so we'll just build servers with similar power usage as before", but given how fast AI has pushed development, that might not happen for a while.

toast0 10 hours ago | parent | prev [-]

> Isn't it more likely that the amount of power/heat generated per rack will stay constant over each upgrade cycle,

Power density seems to grow each cycle. But eventually your DC hits its power capacity limit, and you have to leave racks empty because there's no power budget left for them.
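To illustrate (a toy example with made-up numbers, not anything from the thread): a fixed site power budget caps how many of the denser racks you can actually populate, regardless of floor space.

    # Toy illustration (made-up numbers): fixed site power vs. growing rack density.
    site_power_kw = 2000      # hypothetical power budget for the room
    rack_positions = 100      # hypothetical physical rack positions on the floor

    for rack_kw in (10, 50, 100):        # per-rack draw each upgrade cycle
        powered = min(rack_positions, site_power_kw // rack_kw)
        print(f"{rack_kw:>3} kW/rack -> {powered} racks powered, "
              f"{rack_positions - powered} left empty")
    #  10 kW/rack -> 100 racks powered, 0 left empty
    #  50 kW/rack -> 40 racks powered, 60 left empty
    # 100 kW/rack -> 20 racks powered, 80 left empty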