▲ skippyboxedhero an hour ago
Xiaomi, Nvidia Nemotron, Minimax, and lots of smaller ones too. There are massive economic incentives to shrink models because they can be served faster and at lower cost, and even with all the money going in, there has to be some revenue supporting that development somewhere. Users are now looking at cost too.

I have been on Anthropic Max for most of this year, and after checking out some of these other models it is clearly overpriced (I would also say the Claude Code moat has been breached). And Anthropic's API pricing gets completely crazy when you use the paradigms they themselves suggest (agents, commands, etc.): token usage keeps going up, so efficient models are what's driving growth.
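A rough back-of-envelope on why agent-style usage blows up API bills (the per-token prices below are placeholders I made up, not Anthropic's actual rates): an agent that re-sends the full conversation each turn pays for the whole history on every call, so input tokens grow roughly quadratically with the number of turns.

    # Back-of-envelope: cost of an agent session that re-sends its full
    # history each turn. Prices are illustrative placeholders, not real rates.
    PRICE_IN = 3.00 / 1_000_000    # $ per input token (assumed)
    PRICE_OUT = 15.00 / 1_000_000  # $ per output token (assumed)

    def session_cost(turns: int, tokens_per_turn: int = 2_000) -> float:
        cost = 0.0
        history = 0
        for _ in range(turns):
            history += tokens_per_turn           # context added this turn
            cost += history * PRICE_IN           # whole history re-sent as input
            cost += tokens_per_turn * PRICE_OUT  # plus the model's reply
        return cost

    for n in (10, 50, 100):
        print(f"{n:>3} turns: ${session_cost(n):.2f}")

At these assumed rates, a 100-turn session costs about 50x a 10-turn one despite having only 10x the turns, which is why efficient models matter so much for agent workloads.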
|
▲ hedgehog 7 hours ago
Smaller open-weights models are also improving noticeably (like Qwen3 Coder 30B); the improvements are happening at all sizes.
▲ cmrdporcupine 7 hours ago
Devstral Small 24B looks promising as something I want to try fine-tuning on DSLs, etc. and then embedding in tooling.
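A minimal sketch of what that fine-tune could look like with Hugging Face peft/LoRA. Everything model-specific here is an assumption: the model id, the target modules, and the dsl.jsonl dataset path are placeholders, and a 24B model realistically needs 4-bit loading or multiple GPUs.

    # LoRA fine-tuning sketch: adapt a small code model to a DSL corpus.
    # Model id, dataset path, and target modules below are assumptions.
    from datasets import load_dataset
    from peft import LoraConfig, get_peft_model
    from transformers import (AutoModelForCausalLM, AutoTokenizer,
                              DataCollatorForLanguageModeling,
                              Trainer, TrainingArguments)

    MODEL = "mistralai/Devstral-Small-2505"  # assumed HF model id
    tok = AutoTokenizer.from_pretrained(MODEL)
    model = AutoModelForCausalLM.from_pretrained(MODEL, device_map="auto")

    # Attach low-rank adapters so only a small fraction of weights train.
    model = get_peft_model(model, LoraConfig(
        r=16, lora_alpha=32, lora_dropout=0.05,
        target_modules=["q_proj", "v_proj"],  # common choice, model-dependent
        task_type="CAUSAL_LM"))

    # dsl.jsonl: one {"text": "<DSL snippet>"} object per line (hypothetical).
    data = load_dataset("json", data_files="dsl.jsonl", split="train")
    data = data.map(lambda ex: tok(ex["text"], truncation=True, max_length=1024),
                    remove_columns=data.column_names)

    Trainer(
        model=model,
        args=TrainingArguments("devstral-dsl-lora",
                               num_train_epochs=3,
                               per_device_train_batch_size=1,
                               gradient_accumulation_steps=8,
                               learning_rate=2e-4,
                               logging_steps=10),
        train_dataset=data,
        data_collator=DataCollatorForLanguageModeling(tok, mlm=False),
    ).train()

The resulting adapter weights are small enough to ship inside tooling and merge into the base model at load time.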
▲ hedgehog 4 hours ago
I haven't tried it yet, but yes. Qwen3 Next 80B works decently in my testing, and fast. I had mixed results with the new Nemotron, but it and the new Qwen models are both very fast to run.
|
|
|
▲ Imustaskforhelp 6 hours ago
How many billion parameters does Gemini 3 Flash have? I can't seem to find that info online.