try-working 15 hours ago
What's gone unnoticed with the Gemma 4 release is that it crowned Qwen as the small-model SOTA. For the first time, a Chinese lab holds the frontier in a model category. It's a minor DeepSeek moment, because Western labs now have to catch up with Alibaba.
guteubvkk 14 hours ago
On my 16 GB GPU, Gemma 4 is better and faster than Qwen 3.5, both at 4-bit, so it's not so clear cut.
lostmsu 15 hours ago
It's unnoticed because it didn't. In Google's own benchmarks they are on par, and I've seen third-party benchmarks where Qwen beats Gemma 4 by a wide margin.
irishcoffee 12 hours ago
The day a Western anything needs to catch up with Alibaba will be a notable day indeed. Also, this will never happen.