kgeist | 2 hours ago:
I suggest Q5 quantization at most, from my experience. Q4 works for short responses but gets weird in longer conversations. There are also dynamic quants, such as Unsloth's, which quantize only certain layers to Q4, because some layers are more sensitive to quantization than others. Smaller models are more sensitive to quantization than larger ones, and different quantization algorithms degrade quality to different degrees. So I think it's somewhat wrong to put everything under the one umbrella of "Q4". It all depends.
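A minimal sketch of the per-layer idea, assuming a hypothetical sensitivity table (the layer names, scores, and threshold below are made up for illustration, not Unsloth's actual heuristics):

```python
# Toy per-layer quantization policy: keep sensitive layers at a
# higher-bit quant and drop the rest to Q4. Sensitivity scores are
# invented here; real tools derive them from calibration data.

# Hypothetical scores (higher = degrades more when quantized hard)
LAYER_SENSITIVITY = {
    "token_embd":      0.92,
    "blk.0.attn_qkv":  0.81,
    "blk.0.ffn_down":  0.35,
    "blk.1.attn_qkv":  0.78,
    "blk.1.ffn_down":  0.22,
    "output":          0.95,
}

THRESHOLD = 0.5  # above this, keep the layer at higher precision

def assign_quant_types(sensitivity: dict[str, float]) -> dict[str, str]:
    """Map each layer to a quant type based on its sensitivity score."""
    return {
        name: ("Q6_K" if score > THRESHOLD else "Q4_K")
        for name, score in sensitivity.items()
    }

if __name__ == "__main__":
    for layer, qtype in assign_quant_types(LAYER_SENSITIVITY).items():
        print(f"{layer:18s} -> {qtype}")
```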
Aurornis | 2 hours ago:
I should clarify that I'm referring generically to the quantization schemes used in local LLM inference, including Unsloth's. Nobody actually quantizes every layer to Q4 in a "Q4" quant.
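One way to see this for yourself: GGUF files record a quant type per tensor, so you can count how many tensors actually sit at each precision. A small sketch assuming the `GGUFReader` API from the `gguf` Python package that ships with llama.cpp (the file path is a placeholder):

```python
# Tally the per-tensor quant types inside a GGUF file to show that a
# "Q4" model file actually mixes several precisions across layers.
# Requires: pip install gguf
from collections import Counter

from gguf import GGUFReader

reader = GGUFReader("model-Q4_K_M.gguf")  # placeholder path

counts = Counter(t.tensor_type.name for t in reader.tensors)
for qtype, n in counts.most_common():
    print(f"{qtype:8s} {n} tensors")
```

On a typical Q4_K_M file this should print a mix, e.g. mostly Q4_K tensors alongside Q6_K and F32 ones, since embeddings, output layers, and norms are usually kept at higher precision.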