tensor-fusion 11 hours ago
There's a third option that might fit some of the "I'm on a Mac but need CUDA" cases: network-mounting an Nvidia GPU from another machine on the same LAN. The GPU stays wherever it lives (office server, lab machine, a roommate's PC), and your Mac runs the CUDA workload locally without any code changes: the same PyTorch/CUDA calls, just intercepted by a stub library that forwards them over the local network.

The tradeoff vs. a physical eGPU: no Thunderbolt bandwidth ceiling or cabling, but you need to be on the same LAN, and there's roughly 4% overhead vs. native. It doesn't help if you need the GPU while traveling, and it won't fix the macOS driver situation for native GPU access.

Disclosure: I work on GPU Go (tensor-fusion.ai/products/gpu-go), so I'm obviously biased toward this approach, but it genuinely is a different point in the design space from an eGPU.
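To make the "intercept and forward" idea concrete, here's a toy sketch of the pattern, not GPU Go's actual protocol or wire format: a client-side stub object looks like a local API, but serializes each call (operation name plus arguments) and ships it over a TCP socket to a worker, which executes it and returns the result. A real CUDA shim does the same thing one ABI level down, intercepting driver-library calls instead of Python methods; the `add`/`mul` ops here are stand-ins I made up for illustration.

```python
import json
import socket
import threading

def worker(server_sock):
    """'Remote' side: receive one serialized call, run it, send the result back."""
    ops = {"add": lambda a, b: a + b, "mul": lambda a, b: a * b}  # stand-ins for kernels
    conn, _ = server_sock.accept()
    with conn:
        req = json.loads(conn.makefile().readline())
        reply = {"result": ops[req["op"]](*req["args"])}
        conn.sendall((json.dumps(reply) + "\n").encode())

class RemoteStub:
    """Client side: looks like a local API, but every call crosses the wire."""
    def __init__(self, addr):
        self.addr = addr

    def __getattr__(self, op):
        def call(*args):
            with socket.create_connection(self.addr) as s:
                s.sendall((json.dumps({"op": op, "args": args}) + "\n").encode())
                return json.loads(s.makefile().readline())["result"]
        return call

# Stand up the "remote GPU host" on a loopback socket for the demo.
server = socket.socket()
server.bind(("127.0.0.1", 0))
server.listen(1)
threading.Thread(target=worker, args=(server,), daemon=True).start()

stub = RemoteStub(server.getsockname())
result = stub.add(2, 3)  # executed on the worker, not locally
print(result)
```

The point of the sketch is only that the caller's code doesn't change; `stub.add(2, 3)` reads like a local call. The engineering work in the real thing is in faithfully proxying a large binary API and keeping the overhead low, which is where the ~4% figure comes from.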
bigyabai 11 hours ago | parent
> same PyTorch/CUDA calls, just intercepted by a stub library that forwards them over the local network.

At that point you're making more work for yourself than debugging over SSH.