rhdunn · 2 hours ago
My experience is that at Q5 and lower you start to see noticeable degradation in performance/quality. It's especially noticeable at Q4, where models easily get trapped in repeating-token loops. I generally use Q6. [1] https://medium.com/@paul.ilvez/demystifying-llm-quantization...
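The intuition behind that degradation can be sketched with a toy example. This is not llama.cpp's actual GGUF scheme (which uses per-block scales and, for k-quants, more elaborate layouts); it's a minimal symmetric round-to-nearest quantizer showing how roundtrip error grows as the bit width drops, the Q6-vs-Q4 trade-off in miniature:

```python
# Illustrative sketch only -- NOT the real GGUF/llama.cpp quantization.
# Simple symmetric round-to-nearest quantization of a weight row,
# showing coarser steps (larger roundtrip error) at lower bit widths.

def quantize_roundtrip(weights, bits):
    """Quantize to signed `bits`-bit integers, then dequantize back."""
    qmax = 2 ** (bits - 1) - 1          # e.g. 7 for 4-bit, 31 for 6-bit
    scale = max(abs(w) for w in weights) / qmax
    return [round(w / scale) * scale for w in weights]

def max_error(weights, bits):
    """Largest absolute roundtrip error over the row."""
    deq = quantize_roundtrip(weights, bits)
    return max(abs(w - d) for w, d in zip(weights, deq))

weights = [0.013 * i - 0.4 for i in range(64)]  # toy weight row
# Fewer bits -> coarser quantization grid -> larger worst-case error.
assert max_error(weights, 4) > max_error(weights, 6) > max_error(weights, 8)
```

The worst-case error is bounded by half the quantization step, so each extra bit roughly halves it; real schemes mitigate this with small per-block scales, which is why quality falls off a cliff only at the lowest bit widths rather than linearly.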
awestroke · an hour ago · parent
Is your experience with this new quantization approach from Intel? Otherwise your comment is off-topic at best, misleading at worst.