DiabloD3 | 4 days ago
I suggest figuring out what your configuration problem is. Which llama.cpp flags are you using? I am absolutely not seeing the same bug you are.
EnPissant | 4 days ago | parent
It's not a bug; it's the reality of token generation, which is bottlenecked by memory bandwidth. Please publish your own benchmarks proving me wrong.
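A rough back-of-envelope sketch of why decode speed tends to be memory-bound (the numbers below are hypothetical, not from either poster's benchmarks): each generated token requires streaming roughly the entire set of model weights from memory, so bandwidth divided by model size gives an upper bound on tokens per second regardless of compute.

    # Sketch: upper bound on decode throughput, assuming every generated
    # token reads all model weights once from memory.
    def max_tokens_per_second(bandwidth_gb_s: float, model_size_gb: float) -> float:
        """Upper bound: memory bandwidth / bytes read per token."""
        return bandwidth_gb_s / model_size_gb

    # Hypothetical example: ~50 GB/s dual-channel DDR5 and a 7B model
    # quantized to ~4 GB caps decode at roughly 12 tokens/s.
    print(max_tokens_per_second(50, 4))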