|
| ▲ | pdimitar 4 days ago | parent | next [-] |
Do you happen to know if it can be run via an eGPU enclosure with e.g. an RTX 5090 inside, under Linux? I've been considering buying a Linux workstation lately and I want it to be full AMD. But if I can just plug in an NVIDIA card via an eGPU enclosure for self-hosting LLMs, that would be amazing. |
| |
| ▲ | oktoberpaard 4 days ago | parent | next [-] | | I’m running Ollama on 2 eGPUs over Thunderbolt. Works well for me. You’re still dealing with an NVIDIA device, of course. The connection type is not going to change that hassle. | | |
| ▲ | pdimitar 4 days ago | parent [-] | | Thank you for the validation. As much as I don't like NVIDIA's shenanigans on Linux, having a local LLM is very tempting and I might put my ideological problems to rest over it. Though I have to ask: why two eGPUs? Is the LLM software smart enough to be able to use any combination of GPUs you point it at? | | |
| ▲ | arcanemachiner 4 days ago | parent | next [-] | | Yes, Ollama is very plug-and-play when it comes to multi-GPU setups. llama.cpp probably is too, but I haven't tried it with a bigger model yet. | |
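For readers who want to try the multi-GPU split described above, here is a minimal llama.cpp sketch (not from the thread; the model file is a placeholder, while --split-mode and --tensor-split are the flags llama.cpp provides for spreading layers across devices):

    # split the model's layers evenly across two visible GPUs
    llama-server -m Qwen3-30B-A3B-Q4_K_M.gguf -ngl 99 \
        --split-mode layer --tensor-split 1,1

Ollama needs no equivalent flags; per the comment above, it detects and uses multiple GPUs on its own.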
| ▲ | SV_BubbleTime 3 days ago | parent | prev [-] | | Just today, progress was released on parallelizing WAN video generation across multiple GPUs. LLMs are way easier to split up. |
|
| ▲ | bigyabai 4 days ago | parent | prev | next [-] | | Sure, though you'll be bottlenecked by the interconnect speed if you're shuttling data between system memory and the dGPU's memory. That shouldn't be an issue for the 30B model, but it would definitely be an issue for the 480B-sized models. | |
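To put rough numbers on that bottleneck (illustrative, not from the thread): Thunderbolt 3/4 carries about 40 Gbit/s, roughly 5 GB/s, far below the hundreds of GB/s of local VRAM, so any weights that have to move across the link for every token will dominate generation time. The usual llama.cpp workaround is partial offload: keep as many layers as fit in the eGPU's VRAM and run the rest on the CPU from system RAM, so the weights stay put and only small activations cross the link.

    # -ngl sets how many layers live in GPU memory; the count and the
    # model file below are placeholders - raise -ngl until VRAM is nearly full
    llama-server -m some-model-Q4_K_M.gguf -ngl 40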
| ▲ | gunalx 4 days ago | parent | prev [-] | | You would still need the drivers and all the stuff that makes NVIDIA difficult on Linux, even with an eGPU. (It's not necessarily terrible, just suboptimal.) I'd rather just add the second GPU inside the workstation, or run the LLM on your AMD GPU. | | |
| ▲ | pdimitar 4 days ago | parent [-] | | Oh, we can run LLMs efficiently with AMD GPUs now? Pretty cool, I haven't been following, thank you. | | |
| ▲ | DarkFuture 4 days ago | parent | next [-] | | I've been running LLMs on my Radeon 7600 XT 16GB for the past 2-3 months without issues (Windows 11). I've been using llama.cpp only. The only thing from AMD I installed (apart from the latest Radeon drivers) is the "AMD HIP SDK" (a very straightforward installer). After unzipping the llama.cpp build (the zip from the GitHub releases page must contain hip-radeon in the name), all I do is this: llama-server.exe -ngl 99 -m Qwen3-14B-Q6_K.gguf And then I connect to llama.cpp in a browser at localhost:8080 for the WebUI (it's basic but does the job; screenshots can be found on Google). You can connect more advanced interfaces to it because llama.cpp has an OpenAI-compatible API. | |
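A quick sketch of using that OpenAI-compatible API once llama-server is running (the /v1/chat/completions route is the one llama.cpp's server exposes; the prompt is a placeholder):

    curl http://localhost:8080/v1/chat/completions \
      -H "Content-Type: application/json" \
      -d '{"messages": [{"role": "user", "content": "Write a haiku about GPUs"}]}'

Anything that speaks the OpenAI API (editors, chat front ends, SDKs) can be pointed at the same URL.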
| ▲ | Plasmoid2000ad 3 days ago | parent | prev | next [-] | | Yes - I'm running LM Studio on Windows with a 6800 XT, and everything works more or less out of the box, using the Vulkan llama.cpp backend on the GPU, I believe. There's also ROCm, but that's not working for me in LM Studio at the moment. I used it early last year to get some LLMs and Stable Diffusion running. As far as I can tell, it was faster before, but Vulkan implementations have caught up or something - so the mucking about often isn't worth it. I believe ROCm is hit or miss for a lot of people, especially on Windows. | |
| ▲ | bavell 3 days ago | parent | prev | next [-] | | IDK about "efficiently", but we've been able to run LLMs locally on AMD for 1.5-2 years now. | |
| ▲ | green7ea 3 days ago | parent | prev [-] | | llama.cpp and LM Studio have a Vulkan backend that is pretty fast. I'm using it to run models on a Strix Halo laptop and it works pretty well. |
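For reference, a sketch of building llama.cpp with that Vulkan backend from source (assumes the Vulkan SDK and GPU drivers are already installed; GGML_VULKAN is the CMake option the project uses, and the model file is a placeholder):

    cmake -B build -DGGML_VULKAN=ON
    cmake --build build --config Release
    ./build/bin/llama-server -m some-model.gguf -ngl 99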
|
| ▲ | indigodaddy 4 days ago | parent | prev | next [-] |
Do we get these good Qwen models when using the qwen-code CLI tool and authenticating via a qwen.ai account? |
| |
| ▲ | bigyabai 3 days ago | parent | next [-] | | I'm not sure, probably? | |
| ▲ | esafak 3 days ago | parent | prev [-] | | You do not need qwen-code or qwen.ai to use them; OpenRouter + opencode suffice. | | |
|
| ▲ | decide1000 4 days ago | parent | prev | next [-] |
I use it on a Tesla P40, a 24 GB GPU. Very happy with the result. |
| |
| ▲ | hkt 4 days ago | parent [-] | | Out of interest, roughly how many tokens per second do you get on that? | | |
| ▲ | edude03 4 days ago | parent [-] | | Like 4. Definitely single digit. The P40s are slow af | | |
| ▲ | coolspot 4 days ago | parent [-] | | The P40 has 346 GB/s of memory bandwidth, which means it should be able to do around 14+ t/s running a 24 GB model + context. |
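Spelling that estimate out: token generation is memory-bandwidth-bound, and each generated token requires roughly one full read of the weights plus context, so

    tokens/s ≈ memory bandwidth / bytes read per token ≈ 346 GB/s / 24 GB ≈ 14.4

This is an upper bound; compute limits and overhead push real numbers lower, which is consistent with the single-digit figure reported above.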
|
| ▲ | tomr75 4 days ago | parent | prev [-] |
| With qwen code? |