| ▲ | SwellJoe 3 hours ago |
| Qwen is better at some things (code, in particular), but Gemma has better prose and better vision. At least, it feels that way to me. |
|
| ▲ | zobzu 3 hours ago | parent [-] |
Gemma is also just way faster. I don't want to wait 10 minutes for a 5–10% better answer (and sometimes an actually worse one). Best approach at the moment is to use your own model router, depending on the task.
| ▲ | SwellJoe 2 hours ago | parent [-] | I'm pretty sure Qwen is faster? The MoE version of Qwen has 3B active parameters, while Gemma 4's has 4B active. Similarly, dense Qwen is 27B while Gemma is 31B. All else being equal (though I know all else isn't equal), Qwen should be faster in both cases. I haven't measured with any precision, but on my AMD hardware (Strix Halo or dual Radeon Pro V620) they seem quite similar in both cases: both MoE models are fast enough for interactive use, and both dense models are notably smarter but much slower, with a long time to first token and single-digit tokens per second once output starts. |
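
The intuition in the comment above (fewer active parameters → faster decode) can be sketched with a back-of-envelope calculation: autoregressive decode is usually memory-bandwidth-bound, so tokens/sec is roughly bandwidth divided by the bytes read per token. The parameter counts come from the thread; the ~256 GB/s Strix Halo bandwidth and Q4 (~0.5 bytes/param) quantization are illustrative assumptions, not measurements from the commenters.

```python
# Rough upper bound on decode speed for a memory-bandwidth-bound LLM.
# Assumption (not from the thread): each generated token streams every
# active parameter from memory once, so tokens/sec ~= bandwidth / bytes_per_token.

def decode_tps(active_params_b: float, bytes_per_param: float, bandwidth_gbs: float) -> float:
    """Approximate max decode tokens/sec given active params (billions),
    bytes per parameter (e.g. ~0.5 for Q4), and memory bandwidth (GB/s)."""
    bytes_per_token = active_params_b * 1e9 * bytes_per_param
    return bandwidth_gbs * 1e9 / bytes_per_token

# Illustrative numbers: Strix Halo is roughly 256 GB/s; Q4 ~= 0.5 bytes/param.
for name, active in [("Qwen MoE (3B active)", 3.0),
                     ("Gemma MoE (4B active)", 4.0),
                     ("Qwen dense (27B)", 27.0),
                     ("Gemma dense (31B)", 31.0)]:
    print(f"{name}: ~{decode_tps(active, 0.5, 256):.0f} tok/s ceiling")
```

The sketch matches the thread's observations in shape: the 3B- and 4B-active MoE models land in clearly interactive territory, while the 27B/31B dense models are roughly an order of magnitude slower, and the 3B-vs-4B gap is real but modest, which is why the two MoE models feel similar in practice.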
|