derekdahmer 2 hours ago

An H100 draws about 1000 W including networking gear and can generate 80-150 tokens/s for a 70B model like Llama.

So, back of the napkin: a decently sized 1,000-token response takes about 8 s at ~125 t/s, so 1000 W × 8 s / 3600 s/h ≈ 2.2 Wh, which even at California electricity rates is about $0.001 of electricity.
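
A minimal sketch of that napkin math (the power draw, throughput, and electricity price are assumptions from this comment, not measurements):

    # Back-of-the-napkin energy cost per LLM response.
    # All numbers are assumptions, not benchmarks.
    GPU_POWER_W = 1000        # H100 plus networking, per the estimate above
    TOKENS_PER_SEC = 125      # midpoint of the 80-150 t/s range
    RESPONSE_TOKENS = 1000    # a decently sized response
    PRICE_PER_KWH = 0.30      # roughly California retail electricity

    seconds = RESPONSE_TOKENS / TOKENS_PER_SEC      # ~8 s
    watt_hours = GPU_POWER_W * seconds / 3600       # ~2.2 Wh
    cost = watt_hours / 1000 * PRICE_PER_KWH        # ~$0.0007

    print(f"{seconds:.1f} s, {watt_hours:.2f} Wh, ${cost:.4f} per response")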

pshc 2 hours ago | parent

With batched parallel requests this scales down further. Even a MacBook M3 on battery power can do inference quickly and efficiently. Large scale training is the power hog.
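
To illustrate the batching point (the aggregate throughput figures at each batch size are hypothetical, chosen only to show the shape of the effect): decode is largely memory-bandwidth bound, so total tokens/s tends to rise steeply with batch size, and the GPU's fixed power draw is amortized over more concurrent requests.

    # Sketch of how batching amortizes GPU power across requests.
    # Aggregate throughput values are hypothetical, not benchmarks.
    GPU_POWER_W = 1000
    RESPONSE_TOKENS = 1000
    PRICE_PER_KWH = 0.30

    # Assumed aggregate tokens/s at each batch size.
    aggregate_tps = {1: 125, 8: 800, 32: 2400}

    for batch, tps in aggregate_tps.items():
        # Energy per response = power * tokens / aggregate throughput
        wh = GPU_POWER_W * RESPONSE_TOKENS / tps / 3600
        print(f"batch {batch:>2}: {wh:.2f} Wh "
              f"(${wh / 1000 * PRICE_PER_KWH:.5f}) per response")

Under these assumptions the per-response energy drops from ~2.2 Wh at batch size 1 to ~0.1 Wh at batch size 32.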