DiabloD3 5 days ago

I don't load all the MoE layers onto my GPU, and I have only about a 15% reduction in token generation speed while maintaining a model 2-3 times larger than VRAM alone.

EnPissant 4 days ago | parent [-]

The slowdown is far more than 15% for token generation. Token generation is mostly bottlenecked by memory bandwidth: dual-channel DDR5-6000 has about 96 GB/s, while an RTX 5090 has about 1.8 TB/s. See my other comment, where I show a 5x slowdown in token generation from moving just the experts to the CPU.
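A back-of-the-envelope roofline model makes the argument concrete. This sketch assumes decode is purely memory-bandwidth-bound and uses the bandwidth figures from the comment above; the `cpu_fraction` values are illustrative, not measured:

```python
# Simplified roofline model for token generation with partial CPU offload.
# Bandwidth numbers from the thread: dual-channel DDR5-6000 ~ 96 GB/s,
# RTX 5090 VRAM ~ 1.8 TB/s.
CPU_BW = 96.0      # GB/s, system RAM
GPU_BW = 1800.0    # GB/s, VRAM

def slowdown(cpu_fraction):
    """Relative token-generation time vs. an all-VRAM baseline, assuming
    decode is memory-bandwidth-bound and `cpu_fraction` of the bytes read
    per token come from system RAM instead of VRAM."""
    all_gpu = 1.0 / GPU_BW
    mixed = cpu_fraction / CPU_BW + (1.0 - cpu_fraction) / GPU_BW
    return mixed / all_gpu

# Even a modest share of per-token reads hitting system RAM dominates:
print(round(slowdown(0.05), 1))  # ~1.9x slower
print(round(slowdown(0.25), 1))  # ~5.4x slower, in the ballpark of the 5x reported
```

Because system RAM is roughly 19x slower than VRAM here, the CPU-side reads dominate total time almost as soon as they appear; this is why the two posters can see such different slowdowns depending on how much of each token's active weights actually lands in system RAM.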

DiabloD3 4 days ago | parent [-]

I suggest figuring out what your configuration problem is.

Which llama.cpp flags are you using? Because I am absolutely not having the same bug you are.
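For context, the usual way to get this split in llama.cpp is to push all layers to the GPU and then pin only the MoE expert tensors back to system RAM with a tensor-override pattern. A sketch of such an invocation (the model path and the exact regex are illustrative; the tensor names depend on the model's GGUF layout):

```shell
# Keep all layers on the GPU (-ngl 999), then override the placement of
# tensors whose names match the expert-FFN pattern so they stay in CPU RAM.
llama-cli -m ./model-q4_k_m.gguf \
  -ngl 999 \
  --override-tensor "ffn_.*_exps\.=CPU" \
  -p "Hello"
```

Which tensors the override pattern actually catches (all experts, or only some layers' experts) changes how many bytes per token come from system RAM, which is exactly the variable the two benchmarks in this thread may be disagreeing on.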

EnPissant 4 days ago | parent [-]

It's not a bug. It's the reality of token generation. It's bottlenecked by memory bandwidth.

Please publish your own benchmarks proving me wrong.

DiabloD3 3 days ago | parent [-]

I cannot reproduce your bug on AMD. I'm going to have to conclude this is a vendor issue.