EnPissant | 4 days ago
The slowdown is far more than 15% for token generation. Token generation is mostly bottlenecked by memory bandwidth: dual-channel DDR5-6000 gives about 96 GB/s, while an RTX 5090 has about 1.8 TB/s. See my other comment, where I show a 5x slowdown in token generation from moving just the experts to the CPU.
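A rough back-of-envelope sketch of why bandwidth dominates: every generated token has to stream the active weights through memory once, so the tokens/s ceiling is roughly bandwidth divided by the active-weight footprint. The active-weight size below is an illustrative assumption, not a number from this thread.

    # Back-of-envelope: token generation ceiling ~= memory bandwidth /
    # bytes of weights streamed per token.
    # The 20 GB active-weight figure is a hypothetical example, not a measurement.

    def tokens_per_second(bandwidth_gb_s: float, active_weight_gb: float) -> float:
        """Upper bound on tokens/s if all active weights are read once per token."""
        return bandwidth_gb_s / active_weight_gb

    active_gb = 20.0              # assumed active (per-token) weights after quantization

    cpu_ddr5_6000_dual = 96.0     # GB/s, dual-channel DDR5-6000
    gpu_rtx_5090 = 1800.0         # GB/s, RTX 5090 (~1.8 TB/s)

    print(f"CPU ceiling: {tokens_per_second(cpu_ddr5_6000_dual, active_gb):.1f} tok/s")
    print(f"GPU ceiling: {tokens_per_second(gpu_rtx_5090, active_gb):.1f} tok/s")
    print(f"Bandwidth ratio: {gpu_rtx_5090 / cpu_ddr5_6000_dual:.1f}x")

The raw bandwidth gap works out to roughly 19x; the observed 5x slowdown is smaller presumably because only the experts were moved to the CPU while the rest of the model stayed on the GPU.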
DiabloD3 | 4 days ago | parent
I suggest figuring out what your configuration problem is. Which llama.cpp flags are you using? Because I am absolutely not hitting the same bug you are.