MarsIronPI 5 hours ago
I've had good experience with GLM-4.7 and GLM-5.0. How would you compare them with Qwen 3.5? (If you have any experience with them.)
CamperBob2 3 hours ago
No experience with 5 and not much with 4.7, but they both have quite a few advocates over on /r/localllama. Unsloth's GLM-4.7-Flash-BF16.gguf is quite fast on the 6000, at around 100 t/s, but it's definitely not as smart as the Qwen 3.5 MoE or dense models of similar size. As far as I'm concerned, Qwen 3.5 renders most other open models, short of perhaps Kimi 2.5, obsolete for general queries, although other models are still said to be better for local agentic use. That, I haven't tried.