thrtythreeforty | 4 days ago
Safely in "millions of devices." The exact number depends on the assumptions you make about all the supporting infrastructure, because the accelerators typically consume only a fraction of the total power requirement. Even so, millions.
cj | 4 days ago | parent
"GPUs per user" would be an interesting metric. (Quick, inaccurate googling) says there will be "well over 1 million GPUs" by end of the year. With ~800 million users, that's 1 NVIDIA GPU per 800 people. If you estimate people are actively using ChatGPT 5% of the day (1.2 hours a day), you could say there's 1 GPU per 40 people in active use. Assuming consistent and even usage patterns. That back of the envelope math isn't accurate, but interesting in the context of understanding just how much compute ChatGPT requires to operate. Edit: I asked ChatGPT how many GPUs per user, and it spit out a bunch of calculations that estimates 1 GPU per ~3 concurrent users. Would love to see a more thorough/accurate break down. | |||||||||||||||||||||||||||||||||||||||||||||||||||||||||||