throwup238 | 9 hours ago
The network bandwidth between nodes is a bigger limitation than compute. The newest Nvidia cards now come with 400 Gbit links to communicate between them, even on a single motherboard. Compared to SETI@home or Folding@home, this would be glacially slow for AI models.
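A rough back-of-envelope sketch of the point above: the time to exchange one full set of gradients scales inversely with link speed, so a home uplink is several orders of magnitude slower than an NVLink-class bus. The model size and link speeds below are illustrative assumptions, not figures from the thread.

```python
# Back-of-envelope: seconds to move one full gradient sync over
# different links. All numbers are assumptions for illustration.

PARAMS = 7e9            # assumed 7B-parameter model
BYTES_PER_GRAD = 2      # fp16 gradients
payload_bits = PARAMS * BYTES_PER_GRAD * 8  # bits per naive full sync

links = {
    "NVLink-class (400 Gbit/s)": 400e9,
    "Data-center Ethernet (100 Gbit/s)": 100e9,
    "Home broadband uplink (20 Mbit/s)": 20e6,
}

for name, bits_per_sec in links.items():
    seconds = payload_bits / bits_per_sec
    print(f"{name}: {seconds:,.1f} s per sync")
```

With these assumptions the 400 Gbit/s link moves a sync in about a quarter of a second, while the 20 Mbit/s uplink takes on the order of 90 minutes for the same payload, which is the "glacial" gap being described.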
fourthark | 8 hours ago
Seems like training would be a better match, where you need tons of compute but don’t care about latency.