devinprater | 3 days ago
Wow, just 32B? This could almost run on a good device with 64 GB RAM. Once it gets to Ollama I'll have to see just what I can get out of this.
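For anyone who wants to try it the same way once it lands, here's roughly what that could look like with the `ollama` Python client. The `qwen3-omni` tag is a guess on my part, since the model isn't published to the Ollama library yet; substitute whatever tag they end up using.

```python
# Minimal sketch of chatting with the model through the Ollama Python client.
# Assumes Ollama is running locally and the model has already been pulled;
# the "qwen3-omni" tag is hypothetical, not an actual published tag.
import ollama

response = ollama.chat(
    model="qwen3-omni",  # hypothetical tag -- replace with the real one
    messages=[{"role": "user", "content": "Summarize what an MoE model is in one sentence."}],
)
print(response["message"]["content"])
```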
plipt | 3 days ago
I see that their HuggingFace link goes to some Qwen3-Omni-30B-A3B models that show a last-updated date of September. The benchmark table in their article shows Qwen3-Omni-Flash-2025-12-01 (and the previous Flash) beating Qwen3-235B-A22B. How is that possible if this is only a 30B-A3B model? It's also confusing that the comparison column starts out with one model but switches to others as you go down the table.

I don't see any Flash variant listed on their HuggingFace. Am I just missing it, or do these names refer to a model only used for their API service, with no open weights to download?
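One way to answer the "am I just missing it" part is to query the Hub directly. A sketch along these lines uses `huggingface_hub.list_models`; it assumes the checkpoints live under the `Qwen` organization, as the 30B-A3B ones do. If no Flash repo shows up, the Flash variant is presumably API-only.

```python
# Sketch: search the Hugging Face Hub for Qwen3-Omni repos to see whether any
# "Flash" variant has open weights. Assumes the weights live under the "Qwen"
# organization, as the other Qwen3-Omni checkpoints do.
from huggingface_hub import list_models

omni_repos = [m.id for m in list_models(author="Qwen", search="Qwen3-Omni")]
for repo_id in omni_repos:
    print(repo_id)

flash_repos = [r for r in omni_repos if "Flash" in r]
print("Flash repos with open weights:", flash_repos or "none found (likely API-only)")
```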
apexalpha | 3 days ago
I run these on a 48 GB Mac because of the unified memory.
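For a rough sense of why that works, here's a back-of-the-envelope check of the weight footprint at common quantization levels. The 30B parameter count comes from the model name mentioned above; the 8 GB allowance for KV cache, runtime, and the OS is my own rough assumption, not a measured number.

```python
# Back-of-the-envelope check of whether a ~30B-parameter model fits in unified
# memory at common quantization levels. Parameter count taken from the model
# name in the thread; the overhead allowance is an assumption, not a measurement.
PARAMS = 30e9          # total parameters (30B)
OVERHEAD_GB = 8        # assumed KV cache + runtime + OS headroom

bytes_per_weight = {
    "fp16":   2.0,     # 16-bit weights
    "q8_0":   1.0,     # ~8 bits/weight
    "q4_k_m": 0.56,    # ~4.5 bits/weight, approximate
}

for name, bpw in bytes_per_weight.items():
    weights_gb = PARAMS * bpw / 1e9
    total_gb = weights_gb + OVERHEAD_GB
    for budget in (48, 64):
        fits = "fits" if total_gb <= budget else "does not fit"
        print(f"{name}: ~{weights_gb:.0f} GB weights, ~{total_gb:.0f} GB total -> {fits} in {budget} GB")
```

On those rough numbers, a 4-bit or 8-bit quant leaves headroom on both a 48 GB and a 64 GB machine, while fp16 does not fit on either.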