| ▲ | armanj 9 hours ago |
| I recall a Qwen exec posted a public poll on Twitter asking which Qwen3.6 model people wanted to see open-sourced, and the 27B variant was by far the most popular choice. Not sure why they ignored it lol. |
|
| ▲ | zozbot234 8 hours ago | parent | next [-] |
| The 27B model is dense. Releasing a dense model first would be terrible marketing, whereas 35A3B is a lot smarter and quicker-witted by comparison! |
| |
| ▲ | arxell 8 hours ago | parent | next [-] | | Each has its pros and cons. Dense models of equivalent total size obviously run slower if all else is equal, but 35A3B is absolutely not 'a lot smarter'... in fact, if you set aside the slower inference rates, Qwen3.5 27B is arguably more intelligent and reliable. I use both regularly on a Strix Halo system. Just see the comparison table here: https://huggingface.co/unsloth/Qwen3.6-35B-A3B-GGUF . The thing you have to acknowledge if running locally (especially for coding tasks) is that your primary bottleneck quickly becomes prompt processing (NOT token generation), and there the differences between dense and MoE are variable and usually negligible. | | |
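A back-of-envelope sketch of the prefill-vs-decode point above. All hardware numbers here are illustrative assumptions (not measurements of Strix Halo or of any Qwen model): prefill is modeled as compute-bound at an assumed 10 effective TFLOPS, decode as bandwidth-bound at an assumed 250 GB/s.

```python
# Rough model: prefill cost scales with compute (~2 FLOPs per active
# parameter per prompt token); decode cost scales with memory bandwidth
# (stream the active weights once per generated token).
# All hardware figures below are assumptions for illustration only.

def prefill_seconds(prompt_tokens, active_params_b, tflops):
    # Compute-bound phase: ~2 FLOPs per active parameter per token.
    flops = 2 * active_params_b * 1e9 * prompt_tokens
    return flops / (tflops * 1e12)

def decode_seconds(new_tokens, active_params_b, bytes_per_param, gbps):
    # Bandwidth-bound phase: read the active weights for every token.
    bytes_per_token = active_params_b * 1e9 * bytes_per_param
    return new_tokens * bytes_per_token / (gbps * 1e9)

# A 3B-active MoE, a 100k-token coding prompt, a 500-token reply,
# assumed ~10 effective TFLOPS and ~250 GB/s, ~1 byte/param (Q8-ish):
p = prefill_seconds(100_000, 3, 10)
d = decode_seconds(500, 3, 1, 250)
print(f"prefill ≈ {p:.0f}s, decode ≈ {d:.0f}s")
```

Under these assumed numbers the prefill phase dominates by an order of magnitude, which is why long coding prompts feel slow locally even when tokens/sec looks fine.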
| ▲ | nunodonato 6 hours ago | parent | next [-] | | I was hoping this would be the model to replace our Qwen3.5-27B, but the difference is marginal. Too risky; I'll pass and wait for the release of a dense version. | |
| ▲ | Mikealcl 7 hours ago | parent | prev [-] | | Could you explain why prompt processing is the bottleneck, please? I've seen this behavior but I don't understand why. | | |
| ▲ | zozbot234 6 hours ago | parent [-] | | You should be able to save a lot on prefill by stashing shared KV-cache prefixes (since the KV cache of a plain transformer is an append-only structure) to nearline bulk storage and fetching them back in as needed. Not sure why local AI engines don't do this already, since it's a natural extension of session save/restore and of what's usually called prompt caching. | | |
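A toy sketch of the shared-prefix idea described above: stash "KV cache" blobs keyed by their token prefix, and on a new prompt only prefill the uncached tail. The KV blobs here are byte-string placeholders; a real engine would serialize the per-layer K/V tensors (this is roughly what llama.cpp's session files hold).

```python
import hashlib

class PrefixCache:
    """Map token prefixes to stashed KV-cache blobs (toy placeholder)."""

    def __init__(self):
        self._store = {}  # prefix hash -> (prefix_len, kv_blob)

    @staticmethod
    def _key(tokens):
        return hashlib.sha256(str(tokens).encode("utf-8")).hexdigest()

    def put(self, tokens, kv_blob):
        self._store[self._key(tokens)] = (len(tokens), kv_blob)

    def longest_prefix(self, tokens):
        # Scan from longest to shortest candidate prefix; because the KV
        # cache is append-only, any cached prefix of the prompt can be
        # reused verbatim and only the tail needs prefilling.
        for n in range(len(tokens), 0, -1):
            hit = self._store.get(self._key(tokens[:n]))
            if hit:
                return n, hit[1]
        return 0, None

cache = PrefixCache()
system_prompt = list(range(1000))        # pretend token ids
cache.put(system_prompt, b"kv-for-system-prompt")

prompt = system_prompt + [42, 43, 44]    # same prefix, new user turn
reused, blob = cache.longest_prefix(prompt)
print(f"reuse {reused} tokens, prefill only {len(prompt) - reused}")
```

A production version would hash fixed-size token blocks rather than scanning every prefix length, which is closer to how block-level prefix caching works in serving engines.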
| ▲ | FuckButtons a minute ago | parent [-] | | If I understand you correctly, this is essentially what vLLM does with its paged KV cache; if I've misunderstood, I apologize. |
|
|
| |
| ▲ | JKCalhoun 7 hours ago | parent | prev | next [-] | | "…whereas 35A3B is a lot smarter…" Must. Parse. Is this a 35 billion parameter model that needs only 3 billion parameters to be active? (Trying to keep up with this stuff.) EDIT: A later comment seems to clarify: "It's a MoE model and the A3B stands for 3 Billion active parameters…" | |
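The naming question above comes down to quick arithmetic, assuming the common reading (~35B total parameters, ~3B active per token, fp16 weights assumed): memory footprint tracks *total* parameters, while per-token compute tracks *active* parameters.

```python
# Why "35B-A3B" is cheap to run but big to store, versus a 27B dense model.
# Assumes fp16/bf16 weights (2 bytes/param) and ~2 FLOPs per active
# parameter per token; both are standard rules of thumb, not measurements.

def model_footprint(total_b, active_b, bytes_per_param=2):
    weights_gb = total_b * 1e9 * bytes_per_param / 1e9
    flops_per_token = 2 * active_b * 1e9
    return weights_gb, flops_per_token

moe = model_footprint(35, 3)     # 35B total, 3B active
dense = model_footprint(27, 27)  # dense: all params active
print(f"MoE:   {moe[0]:.0f} GB weights, {moe[1]:.1e} FLOPs/token")
print(f"Dense: {dense[0]:.0f} GB weights, {dense[1]:.1e} FLOPs/token")
```

So the MoE needs *more* memory than the dense 27B but roughly 9x less compute per token, which is the whole "smaller experts keep tokens flowing fast" tradeoff discussed elsewhere in the thread.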
| ▲ | halJordan 4 hours ago | parent | prev | next [-] | | That makes no sense. If you were just going to release the "more hype-able because it's quicker" model, then why have a poll? | |
| ▲ | Miraste 8 hours ago | parent | prev [-] | | What? 35B-A3B is not nearly as smart as 27B. | | |
| ▲ | stratos123 2 hours ago | parent | next [-] | | One interesting thing about Qwen3 is that looking at the benchmarks, the 35B-A3B models seem to be only a bit worse than the dense 27B ones. This is very different from Gemma 4, where the 26B-A4B model is much worse on several benchmarks (e.g. Codeforces, HLE) than 31B. | | |
| ▲ | zozbot234 2 hours ago | parent [-] | | > This is very different from Gemma 4, where the 26B-A4B model is much worse on several benchmarks (e.g. Codeforces, HLE) than 31B. Wouldn't you totally expect that, since 26A4B is lower on both total and active params? The more sensible comparison would pit Qwen 27B against Gemma 31B and Gemma 26A4B against Qwen 35A3B. |
| |
| ▲ | ekianjo 8 hours ago | parent | prev | next [-] | | yeah, the 27B feels like something completely different. If you use it on long-context tasks it performs WAY better than 35B-A3B | | |
| ▲ | Der_Einzige 7 hours ago | parent [-] | | I've been telling analysts/investors for a long time that dense architectures aren't "worse" than sparse MoEs and to keep anticipating the see-saw of releases between those two sub-architectures. Glad to be continuously vindicated on this one. For those who don't believe me: go take a look at the logprobs of a MoE model and a dense model and let me know if you notice anything. Researchers sure did. |
| |
| ▲ | zkmon 8 hours ago | parent | prev [-] | | Yes. |
|
|
|
| ▲ | arunkant 8 hours ago | parent | prev | next [-] |
| Probably coming next |
|
| ▲ | zkmon 8 hours ago | parent | prev [-] |
| I'm guessing 3.5-27B would beat 3.6-35B. MoE is a bad idea here, because for the same VRAM the 27B's weights leave a lot more room for context, and quality of work depends directly on context size, not just the "B" number. |
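The VRAM-for-context tradeoff claimed above can be sketched numerically. All figures here are illustrative assumptions (Q4-ish weight quantization at ~0.55 bytes/param, ~0.1 MB of KV cache per token), not the actual Qwen configs: smaller total weights leave more room for KV cache in a fixed memory budget.

```python
# Rough context budget left over after loading weights into a fixed
# memory pool. Both bytes_per_param and kv_bytes_per_token are assumed
# illustrative values, not measurements of any real model.

def context_budget(vram_gb, total_params_b, bytes_per_param=0.55,
                   kv_bytes_per_token=100_000):
    weights = total_params_b * 1e9 * bytes_per_param
    free = vram_gb * 1e9 - weights
    return int(max(free, 0) / kv_bytes_per_token)

for name, params in [("27B dense", 27), ("35B-total MoE", 35)]:
    print(f"{name}: ~{context_budget(32, params):,} tokens of KV cache fit")
```

Under these assumptions the dense 27B fits noticeably more context into the same 32 GB pool, which is the commenter's point; whether that outweighs the MoE's faster decoding depends on the workload.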
| |
| ▲ | zozbot234 8 hours ago | parent | next [-] | | MoE is not a bad idea for local inference if you have fast storage to offload to, and this is quickly becoming feasible with PCIe 5.0 interconnect. | |
| ▲ | perbu 6 hours ago | parent | prev [-] | | MoE is excellent for unified-memory inference hardware like the DGX Spark, Apple Mac Studio, etc. The large memory means you can fit quite a few B's, and the small active experts keep those tokens flowing fast. |
|