EnPissant 5 days ago: MoE models need just as much VRAM as dense models because every token may use a different set of experts. They just run faster.
regularfry 5 days ago: This isn't quite right: it'll run with the full model loaded to RAM, swapping in the experts as it needs them. It has turned out in the past that experts can be stable across more than one token, so you're not swapping as much as you'd think. I don't know if that's been confirmed to still be true on recent MoEs, but I wouldn't be surprised.
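A minimal sketch of that idea, not taken from any real runtime: treat VRAM as an LRU cache of experts and count how often a token's experts are already resident. The expert count, cache size, and the skewed reuse distribution below are all made-up illustrative numbers.

    #include <cstdio>
    #include <list>
    #include <random>
    #include <unordered_map>

    int main() {
        const int num_experts     = 128;   // assumption: experts in the model
        const int experts_per_tok = 4;     // assumption: top-k routing
        const int cache_slots     = 32;    // assumption: experts that fit in VRAM
        const int tokens          = 10000;

        std::list<int> lru;                                       // front = most recently used
        std::unordered_map<int, std::list<int>::iterator> where;  // expert id -> position in lru
        std::mt19937 rng(42);
        std::geometric_distribution<int> pick(0.05);               // skew stands in for expert reuse

        long loads = 0;
        for (int t = 0; t < tokens; ++t) {
            for (int k = 0; k < experts_per_tok; ++k) {
                int e = pick(rng) % num_experts;
                auto it = where.find(e);
                if (it != where.end()) {
                    lru.erase(it->second);           // hit: expert already resident, just refresh
                } else {
                    ++loads;                         // miss: this would be a swap-in
                    if ((int)lru.size() == cache_slots) {
                        where.erase(lru.back());     // evict the least recently used expert
                        lru.pop_back();
                    }
                }
                lru.push_front(e);
                where[e] = lru.begin();
            }
        }
        printf("swapped in %ld of %d expert requests (%.1f%%)\n",
               loads, tokens * experts_per_tok,
               100.0 * loads / (tokens * experts_per_tok));
        return 0;
    }

With real routing traces the hit rate is an empirical question, which is what the caveat about recent MoEs above is pointing at.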
mcrutcher 5 days ago: Also, though nobody has put the work in yet, the GH200 and GB200 (the NVIDIA "superchips") support exposing their full LPDDR5X and HBM3 as UVM (unified virtual memory), with much more bandwidth between LPDDR5X and HBM3 than a typical instance gets over PCIe. UVM can handle "movement" in the background and would be absolutely killer for these MoE architectures, but none of the popular inference engines allocate memory the right way for these systems: they don't use cudaMallocManaged(), don't let UVM (CUDA) handle data movement for them (automatic page migration and dynamic data movement), and aren't architected to avoid the pitfalls of this environment (e.g. the implications of CUDA graphs when using UVM). It's really not that much code, though, and all the actual capabilities have been there since about the middle of this year. I think someone will make this work and it will be a huge efficiency gain for the right model/workflow combinations: effectively, being able to run 1T-parameter MoE models on GB200 NVL4 at "full speed" if your workload has the right characteristics.
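As a rough illustration of the allocation pattern described above (not code from any existing engine), a host-side CUDA sketch: one managed allocation for all expert weights, advice hints so the cold copy prefers CPU-attached memory, and prefetches for the experts a router picked. The expert size, expert count, and hard-coded router output are placeholders.

    #include <cuda_runtime.h>
    #include <cstdio>

    int main() {
        const size_t expert_bytes = 450ull << 20;  // placeholder: one ~4-bit expert
        const int    num_experts  = 128;           // placeholder
        const int    gpu          = 0;

        // One managed allocation for all experts: UVM migrates pages on demand
        // instead of the engine issuing explicit cudaMemcpy calls per expert.
        char* experts = nullptr;
        cudaMallocManaged(&experts, expert_bytes * num_experts);

        // Hints: prefer keeping the cold copy in CPU-attached memory, but let
        // the GPU access it directly; then prefetch the experts the router chose.
        cudaMemAdvise(experts, expert_bytes * num_experts,
                      cudaMemAdviseSetPreferredLocation, cudaCpuDeviceId);
        cudaMemAdvise(experts, expert_bytes * num_experts,
                      cudaMemAdviseSetAccessedBy, gpu);

        const int chosen[4] = {3, 17, 42, 99};     // placeholder router output for one token
        for (int e : chosen) {
            cudaMemPrefetchAsync(experts + e * expert_bytes, expert_bytes, gpu, 0);
        }
        cudaDeviceSynchronize();                   // expert kernels would launch here
        printf("prefetched %d experts\n", 4);

        cudaFree(experts);
        return 0;
    }

Whether this beats explicit expert offloading is exactly the open question in the comment above; the point is only that the plumbing (managed allocation, advice, prefetch) already exists in the CUDA runtime.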
EnPissant 5 days ago: What you are describing would be uselessly slow and nobody does that.
DiabloD3 5 days ago: I don't load all the MoE layers onto my GPU, and I have only about a 15% reduction in token generation speed while maintaining a model 2-3 times larger than VRAM alone.
EnPissant 4 days ago: The slowdown is far more than 15% for token generation. Token generation is mostly bottlenecked by memory bandwidth: dual-channel DDR5-6000 has 96GB/s and an RTX 5090 has 1.8TB/s. See my other comment where I show a 5x slowdown in token generation from moving just the experts to the CPU.
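Back-of-envelope for that claim, using the bandwidth figures above and, as an assumed example, the ~5.1B active parameters at ~4-bit weights cited downthread for gpt-oss-120B; KV cache and other traffic are ignored, so these are loose ceilings.

    #include <cstdio>

    int main() {
        const double bytes_per_token = 5.1e9 * 0.5;  // active params x ~0.5 bytes/param (4-bit)
        const double ddr5_dual_bw    = 96e9;         // dual-channel DDR5-6000, bytes/s
        const double rtx5090_bw      = 1.8e12;       // RTX 5090 GDDR7, bytes/s

        // Token generation must stream the active weights once per token, so
        // memory bandwidth puts a hard ceiling on tokens/s regardless of compute.
        printf("CPU-side ceiling: ~%.0f tok/s\n", ddr5_dual_bw / bytes_per_token);
        printf("GPU-side ceiling: ~%.0f tok/s\n", rtx5090_bw / bytes_per_token);
        printf("bandwidth ratio : ~%.0fx\n", rtx5090_bw / ddr5_dual_bw);
        return 0;
    }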
DiabloD3 4 days ago: I suggest figuring out what your configuration problem is. Which llama.cpp flags are you using? Because I am absolutely not having the same bug you are.
EnPissant 4 days ago: It's not a bug. It's the reality of token generation. It's bottlenecked by memory bandwidth. Please publish your own benchmarks proving me wrong.
DiabloD3 3 days ago: I cannot reproduce your bug on AMD. I'm going to have to conclude this is a vendor issue.
furyofantares 5 days ago: I do it with gpt-oss-120B on 24 GB VRAM.
EnPissant 4 days ago: You don't. You run some of the layers on the CPU.
furyofantares 4 days ago: You're right that I was confused about that. LM Studio defaults to 12/36 layers on the GPU for that model on my machine, but you can crank it to all 36 on the GPU. That does slow it down, but I'm not finding it unusable and it seems like it has some advantages - but I doubt I'm going to run it this way.
EnPissant 4 days ago: FWIW, that's an 80GB model and you also need KV cache. You'd need 96GB-ish to run it on the GPU.
furyofantares 4 days ago: Do you know if it's doing what was described earlier when I run it with all layers on the GPU - paging an expert in every time the expert changes? Each expert is only 5.1B parameters.
EnPissant 4 days ago: It makes absolutely no sense to do what OP described. The decode stage is bottlenecked on memory bandwidth: once you pull the weights from system RAM, your work is almost done. To then ship gigabytes of weights PER TOKEN over PCIe to do some trivial computation on the GPU is crazy. What actually happens is you run some or all of the MoE layers on the CPU from system RAM. This can be tolerable for smaller MoE models, but keeping it all on the GPU will still be 5-10x faster. I'm guessing LM Studio gracefully falls back to running _something_ on the CPU. Hopefully you are running only the MoE layers on the CPU. I've only ever used llama.cpp.
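A rough worst case behind "gigabytes of weights per token", using figures from this thread (4 active experts of roughly 0.9B parameters each at ~4-bit) and an assumed PCIe 5.0 x16 link of ~64 GB/s, ignoring per-layer routing detail and any expert reuse.

    #include <cstdio>

    int main() {
        const double params_per_expert = 0.9e9;   // figure from downthread (gpt-oss-120b)
        const double bytes_per_param   = 0.5;     // ~4-bit quantization
        const int    active_experts    = 4;       // top-4 routing
        const double pcie_bw           = 64e9;    // assumed PCIe 5.0 x16, bytes/s

        const double bytes_per_token = params_per_expert * bytes_per_param * active_experts;
        printf("worst-case weights shipped per token: %.1f GB\n", bytes_per_token / 1e9);
        printf("PCIe-only ceiling: ~%.0f tok/s, before doing any actual math\n",
               pcie_bw / bytes_per_token);
        return 0;
    }

Any expert reuse across consecutive tokens raises that ceiling, which is why residency and caching matter so much here.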
furyofantares 4 days ago: I tried a few things and checked CPU usage in Task Manager to see how much work the CPU is doing.

KV cache in GPU and 36/36 layers in GPU: CPU usage under 3%.
KV cache in GPU and 35/36 layers in GPU: CPU usage at 35%.
KV cache moved to CPU and 36/36 layers in GPU: CPU usage at 34%.

I believe you that it doesn't make sense to do it this way, it is slower, but it doesn't appear to be doing much of anything on the CPU. You say gigabytes of weights PER TOKEN, is that true? I think an expert is about 2 GB, so a new expert is 2 GB, sure - but I might have all the experts for the token already in memory, no?
EnPissant 4 days ago: gpt-oss-120b chooses 4 experts per token and combines them. I don't know how LM Studio works; I only know the fundamentals. There is no way it's sending experts to the GPU per token. Also, the CPU doesn't have much work to do. It's mostly waiting on memory.
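A minimal sketch of what "chooses 4 experts and combines them" means mechanically, with toy router scores and scalar stand-ins for expert outputs (not gpt-oss internals): take the top-k router logits, softmax over just those, and mix the selected experts' outputs by those weights.

    #include <algorithm>
    #include <cmath>
    #include <cstdio>
    #include <vector>

    int main() {
        const int num_experts = 128;   // experts available to the router
        const int top_k       = 4;     // experts actually run per token

        std::vector<float> router_logits(num_experts);
        for (int e = 0; e < num_experts; ++e)
            router_logits[e] = std::sin(0.37f * e);     // toy router scores

        // Indices of the top_k largest logits.
        std::vector<int> idx(num_experts);
        for (int e = 0; e < num_experts; ++e) idx[e] = e;
        std::partial_sort(idx.begin(), idx.begin() + top_k, idx.end(),
                          [&](int a, int b) { return router_logits[a] > router_logits[b]; });

        // Softmax over just the selected logits, then a weighted sum of the
        // selected experts' outputs (a scalar stand-in for expert_k(x) here).
        float denom = 0.0f;
        for (int k = 0; k < top_k; ++k) denom += std::exp(router_logits[idx[k]]);
        float mixed = 0.0f;
        for (int k = 0; k < top_k; ++k) {
            float w          = std::exp(router_logits[idx[k]]) / denom;
            float expert_out = 1.0f + idx[k];           // pretend expert output
            mixed += w * expert_out;
        }
        printf("selected experts: %d %d %d %d -> mixed output %.3f\n",
               idx[0], idx[1], idx[2], idx[3], mixed);
        return 0;
    }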
furyofantares 4 days ago: > There is no way it's sending experts to the GPU per token.

Right, it seems like either experts are stable across sequential tokens fairly often, or there's more than 4 experts in memory and it's stable within the in-memory experts for sequential tokens fairly often, like the poster said.
furyofantares 4 days ago: ^ Er, misspoke: each expert is at most 0.9B parameters and there are 128 experts. 5.1B is the number of active parameters (4 experts + some other parameters).
bigyabai 4 days ago: I run the 30B Qwen3 on my 8GB Nvidia GPU and get a shockingly high tok/s.
EnPissant 4 days ago: For contrast, I get the following for an RTX 5090 and Qwen3 Coder 30B quantized to ~4 bits:

- Prompt processing 65k tokens: 4818 tokens/s
- Token generation 8k tokens: 221 tokens/s

If I offload just the experts to run on the CPU I get:

- Prompt processing 65k tokens: 3039 tokens/s
- Token generation 8k tokens: 42.85 tokens/s

As you can see, token generation is over 5x slower. This is only using ~5.5GB VRAM, so the token generation could be sped up a small amount by moving a few of the experts onto the GPU.
littlestymaar 5 days ago: AFAIK many people on /r/localLlama do pretty much that.
zettabomb 5 days ago: llama.cpp has built-in support for doing this, and it works quite well. Lots of people running LLMs on limited local hardware use it.
EnPissant 4 days ago: llama.cpp has support for running some or all of the layers on the CPU. It does not swap them into the GPU as needed.
regularfry 5 days ago: It's neither hypothetical nor rare.
DiabloD3 4 days ago: Same calculation, basically. Any given ~30B model is going to be the same size and use the same VRAM (assuming you load it all into VRAM, which MoEs do not need to do).