KronisLV 4 hours ago
For its size, that's really good! Though I bet being a dense model helps a lot: if it were MoE at that size, the benchmark performance would probably drop quite a bit (which, consequently, would also mean I could at least run it at decent tokens/second on the bunch of Nvidia L4 cards available to me, which at present are only really okay with MoE models).

It's cool that they added comparisons to their own Mistral Small 4 119B A7B, which kind of shows that! They could have also included comparisons to something like Qwen Coder Next 80B A3B (or maybe the newer Qwen 3.6 35B A3B, or the 27B dense one), maybe DeepSeek V4 Flash 284B A13B, or the older GPT-OSS 120B A5B, to better illustrate that difference and where their model sits. That would probably paint a more positive picture than just comparing themselves against a bunch of bigger models!

Come to think of it, alongside throwing some money at DeepSeek and not just Anthropic, I should probably get a Mistral subscription as well sometime, to see how they perform on various tasks, since they seem pretty cost effective and it's nice to support at least some EU orgs: https://mistral.ai/pricing