nickysielicki 5 days ago
The calculation under "Quiz 2: GPU nodes" is incorrect, to the best of my knowledge. There aren't enough ports per GPU and/or per switch (less the crossbar connections) to fully realize the 450 GB/s that's theoretically possible, which is why 3.2 TB/s of internode bandwidth is what's offered on all of the major cloud providers and the reference systems. If it were 3.6 TB/s, this would produce internode bottlenecks in any distributed ring workload. Shamelessly: I'm open to work if anyone is hiring.
aschleck 5 days ago | parent
It's been a while since I thought about this, but isn't the reason providers advertise only 3.2 Tbps that that's the limit of a single node's connection to the InfiniBand network? DGX is spec'd to pair each H100 with a ConnectX-7 NIC, and those cap out at 400 Gbps. 8 GPUs * 400 Gbps per GPU = 3.2 Tbps. Quiz 2 is confusingly worded but is, iiuc, referring to intranode GPU connections rather than internode networking.
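To keep the bit/byte units straight, here's a quick back-of-the-envelope sketch in Python, assuming only the figures cited in this thread (450 GB/s of NVLink bandwidth per H100, one 400 Gbps ConnectX-7 NIC per GPU, 8 GPUs per node):

    # Back-of-the-envelope comparison of intranode vs internode bandwidth
    # for an 8-GPU H100 node, using the figures cited in this thread.
    GPUS_PER_NODE = 8
    NVLINK_GB_PER_S_PER_GPU = 450   # GB/s (bytes) of NVLink bandwidth per H100
    NIC_GBIT_PER_S_PER_GPU = 400    # Gbit/s per ConnectX-7 NIC, one NIC per GPU

    # Intranode: aggregate NVLink bandwidth across the node, in bytes
    intranode_tb_s = GPUS_PER_NODE * NVLINK_GB_PER_S_PER_GPU / 1000       # 3.6 TB/s
    # Internode: aggregate NIC bandwidth across the node, in bits and bytes
    internode_tbit_s = GPUS_PER_NODE * NIC_GBIT_PER_S_PER_GPU / 1000      # 3.2 Tbit/s
    internode_tb_s = internode_tbit_s / 8                                 # 0.4 TB/s

    print(f"intranode NVLink aggregate: {intranode_tb_s:.1f} TB/s")
    print(f"internode NIC aggregate:    {internode_tbit_s:.1f} Tbit/s ({internode_tb_s:.1f} TB/s)")

So the 3.6 TB/s figure is an intranode (NVLink) aggregate in bytes, while the 3.2 figure cloud providers quote is an internode aggregate in bits, i.e. roughly 400 GB/s per node.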