FridgeSeal, 2 days ago:
That’s great for AI people, but can we use this for other distributed workloads that aren’t ML?
geerlingguy, 2 days ago:
I've been testing HPL and mpirun a little, though not yet with this new RDMA capability (Ring seems to be the currently supported method), and it was a little rough around the edges. See: https://ml-explore.github.io/mlx/build/html/usage/distribute...
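For context on the "Ring" method mentioned in the MLX distributed docs: it refers to the standard ring all-reduce communication pattern, where N nodes pass chunks of a buffer around a ring in N-1 scatter-reduce steps followed by N-1 all-gather steps. A minimal single-process simulation of that pattern (plain Python lists standing in for per-node buffers; no MLX, MPI, or networking involved; all names are illustrative):

```python
def ring_all_reduce(buffers):
    """Sum-reduce equal-length buffers across simulated 'nodes' in a ring.

    This mimics the ring all-reduce pattern: each node's buffer is split
    into n chunks; chunks travel around the ring, accumulating partial
    sums (scatter-reduce), then the completed sums travel around again
    (all-gather). After the call, every buffer holds the element-wise sum.
    """
    n = len(buffers)
    length = len(buffers[0])
    assert length % n == 0, "sketch assumes buffer length divisible by node count"
    size = length // n

    def chunk(idx):
        # Slice for chunk idx of a buffer.
        return slice(idx * size, (idx + 1) * size)

    # Phase 1: scatter-reduce. In step s, node i sends chunk (i - s) to
    # node i+1, which adds it into its own copy of that chunk. After n-1
    # steps, node i holds the fully reduced chunk (i + 1) mod n.
    for s in range(n - 1):
        for i in range(n):
            dst = (i + 1) % n
            sl = chunk((i - s) % n)
            buffers[dst][sl] = [a + b for a, b in zip(buffers[dst][sl],
                                                      buffers[i][sl])]

    # Phase 2: all-gather. In step s, node i forwards its completed chunk
    # (i + 1 - s) to node i+1, so every node ends up with every chunk.
    for s in range(n - 1):
        for i in range(n):
            dst = (i + 1) % n
            sl = chunk((i + 1 - s) % n)
            buffers[dst][sl] = list(buffers[i][sl])

    return buffers


# Three "nodes", each contributing a constant buffer; every node ends
# up with the element-wise sum 1 + 2 + 3 = 6.
print(ring_all_reduce([[1, 1, 1], [2, 2, 2], [3, 3, 3]]))
# → [[6, 6, 6], [6, 6, 6], [6, 6, 6]]
```

The appeal of the ring pattern is that each node only ever talks to its neighbor and bandwidth use is balanced, which is why it works over ordinary Ethernet; RDMA backends instead cut the per-message latency of those same exchanges.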
dagmx, 2 days ago:
Sure, there’s nothing about it that’s tied to ML. It’s a faster interconnect; use it for many kinds of shared-compute scenarios.