anonzzzies 5 hours ago

Is that true? Because that's indeed FAR less than I thought. That would definitely make me worry a lot less about energy consumption (not that I would go and consume more, but I wouldn't feel as guilty, I guess).

derekdahmer 2 hours ago | parent

An H100 draws about 1000 W including networking gear and can generate 80-150 tokens/s for a 70B model like Llama.

So back of the napkin: a decently sized 1000-token response takes about 8 s at ~125 tokens/s, so you're talking about 8 s / 3600 s × 1000 W ≈ 2.2 Wh, which even in California is about $0.001 of electricity.
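
To sanity-check that arithmetic, here's a quick Python sketch. The 1000 W draw, the 125 tokens/s midpoint, and the $0.30/kWh rate are assumed round numbers, not measurements:

    # Back-of-the-napkin energy and cost for one LLM response.
    # All constants are assumptions: 1000 W (H100 + networking),
    # 125 t/s (midpoint of 80-150), $0.30/kWh (rough CA rate).
    POWER_W = 1000
    TOKENS_PER_S = 125
    PRICE_PER_KWH = 0.30

    def cost_per_response(tokens: int) -> tuple[float, float]:
        """Return (energy in Wh, cost in USD) for one response."""
        seconds = tokens / TOKENS_PER_S
        energy_wh = POWER_W * seconds / 3600
        return energy_wh, energy_wh / 1000 * PRICE_PER_KWH

    wh, usd = cost_per_response(1000)
    print(f"{wh:.2f} Wh, ${usd:.4f}")  # -> 2.22 Wh, $0.0007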

pshc 2 hours ago | parent

With batched parallel requests this scales down further. Even a MacBook M3 on battery power can do inference quickly and efficiently. Large-scale training is the power hog.
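
To illustrate the batching point with the same toy numbers: a fixed ~1 kW draw amortized across a batch divides the per-request energy by roughly the batch size, assuming aggregate throughput grows with the batch (which it does while decoding is memory-bandwidth-bound). The 2000 tokens/s aggregate figure below is an illustrative assumption:

    # Toy amortization: batch_size requests share one ~1 kW GPU
    # that sustains tokens_per_s_total across the whole batch.
    POWER_W = 1000

    def energy_per_request_wh(tokens: int, tokens_per_s_total: float,
                              batch_size: int) -> float:
        # Wall-clock time to finish the whole batch, then split the
        # total energy evenly across the requests in it.
        seconds = tokens * batch_size / tokens_per_s_total
        return POWER_W * seconds / 3600 / batch_size

    print(energy_per_request_wh(1000, 125, 1))    # ~2.22 Wh (single request)
    print(energy_per_request_wh(1000, 2000, 32))  # ~0.14 Wh per request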