jrop 6 hours ago
I don't buy this. I've long wondered whether the larger models, for all the useful knowledge they exhibit, aren't simply more wasteful as we greedily chase the frontier of "bigger is getting us better results, so make it bigger". Qwen3-Coder-Next seems to be a point in favor of that thought: we need to spend some time exploring what smaller models are capable of. Perhaps I'm grossly wrong -- I guess time will tell.
bityard 5 hours ago | parent
You are not wrong: small models can be trained for niche use cases, and plenty of people and companies are doing exactly that. The problem is that you need one such model per use case, whereas the bigger models cover a bigger problem space. There is also the counter-intuitive phenomenon where training a model on a wider variety of content than the task apparently requires somehow makes it better. For example, models trained only on English content measurably underperform at writing sensible English compared with models trained on a handful of languages, even when controlling for the size of the training set. It doesn't make sense to me, but it probably does to credentialed AI researchers who know what's going on under the hood.
segmondy 4 hours ago | parent
Eventually we will have smarter small models, but as of now, larger models are smarter by far. Time and experience have already answered that.