bob1029 6 hours ago

> It might be 800GbE but it's still so many hops; PCIe is weighty.

Once you need to reach beyond L2/L3 cache, it is often the case that perfectly viable experiments can no longer be executed in reasonable timeframes. The current machine-learning paradigm isn't that latency sensitive, but other paradigms can't be parallelized in the same way and are very sensitive to latency.
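A rough back-of-the-envelope sketch of the point: for a workload whose steps are strictly dependent (so latency can't be hidden by overlap), wall time scales linearly with per-step latency, and each tier of the hierarchy costs an order of magnitude or more. The latency figures below are commonly cited ballpark numbers, not measurements from any specific system.

```python
# Why serialized (non-parallelizable) workloads are so latency sensitive:
# N dependent steps take N * per_step_latency, full stop.
# Figures are rough orders of magnitude, not measurements.
LATENCY_NS = {
    "L2 cache hit": 5,
    "L3 cache hit": 40,
    "local DRAM": 100,
    "PCIe round trip": 1_000,
    "datacenter network RTT": 10_000,
}

def serialized_chain_seconds(steps: int, per_step_ns: float) -> float:
    """Wall time for `steps` strictly dependent operations (no overlap possible)."""
    return steps * per_step_ns * 1e-9

STEPS = 1_000_000  # one million dependent steps
for tier, ns in LATENCY_NS.items():
    print(f"{tier:>24}: {serialized_chain_seconds(STEPS, ns):.3f} s")
```

The same million-step chain that finishes in milliseconds out of cache takes seconds once every step crosses the network, no matter how much bandwidth (800GbE or otherwise) the link has.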