oktoberpaard 4 days ago
I’m running Ollama on 2 eGPUs over Thunderbolt. Works well for me. You’re still dealing with an NVIDIA device, of course. The connection type is not going to change that hassle.
pdimitar 4 days ago
Thank you for the validation. As much as I don't like NVIDIA's shenanigans on Linux, having a local LLM is very tempting, and I might put my ideological problems to rest over it. Though I have to ask: why two eGPUs? Is the LLM software smart enough to use any combination of GPUs you point it at?
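(For what it's worth, GPU selection is often handled below the LLM software itself: CUDA applications generally honor the standard CUDA_VISIBLE_DEVICES environment variable, which restricts the devices a process can see. Here's a minimal Python sketch of pinning a server to two specific cards; it assumes Ollama's CUDA backend respects that variable, which CUDA-backed programs typically do.)

    import os
    import subprocess

    # CUDA_VISIBLE_DEVICES limits which GPUs a CUDA process can see;
    # "0,1" exposes the first two devices as nvidia-smi enumerates them.
    env = os.environ.copy()
    env["CUDA_VISIBLE_DEVICES"] = "0,1"

    # Start the local Ollama server; inference then only ever sees
    # the two exposed GPUs.
    subprocess.run(["ollama", "serve"], env=env, check=True)

(And once multiple GPUs are visible, llama.cpp, which Ollama builds on, can split a model's layers across them, so a model too big for one card can still load.)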