0xbadcafebee 4 hours ago
Mistral models are definitely good enough. Most people fall for what I call the SOTA Logical Fallacy: whenever there is a 'better model', they think they need to use it, when less-powerful models actually perform the same tasks just as well. (It's an inverse form of Shifting Baseline Syndrome: every time a new model comes out, people shift their baseline of what is acceptable, despite the fact that the previous baseline was acceptable for the same task.)

Devstral Small 2 was (and remains) a particularly strong small coding model, even beating larger open-weights models. Mistral's "problem" is marketing: other providers ship model updates constantly, so they stay in the news and seem like they're "beating" the competition. And it works: people get emotionally attached to brands and models, deciding who's better in the court of popular opinion, and that drives their choices (& dollars).
badsectoracula 26 minutes ago
TBH sometimes I feel like I'm "emotionally attached" to Mistral's models because I always end up using them :-P. But that's because, as you wrote, their small models (I only use local stuff) are very strong.

I was trying Qwen3.6 27B recently, and while it's nice that it can do tool calls during the reasoning process (I had it confirm its thoughts by writing Python code), it often ended up confusing itself during reasoning (regardless of tool calls), getting stuck in loops where it questions itself over and over endlessly. Devstral Small 2, however, just works, for the most part.

Qwen3.6 27B can probably handle more complex tasks: when I asked it, as a test, to write a function that checks for collision between two AABBs in C, and gave it a tool to call Python code for confirmation, it actually wrote a Python script that generates C code with the tests, calls GCC to compile it, and runs the resulting binary to run the tests, which is something Mistral's small models couldn't do. But I always felt I could just leave DS2 doing stuff in the background (or while I'm doing something else) and it would produce something relatively useful, whereas in the little time I spent with Qwen3.6 27B it felt more "unstable" (and much slower, both because of literally slower inference and because of endless reams of text).

Recently I also started using Ministral 3B and 14B. These can do some reasoning too, and for very simple stuff Ministral 3B is very fast (I honestly didn't expect a 3B model to be anything more than a novelty). They also have some vision abilities, though they're quite mediocre at vision, so I haven't found much use for this; passing something through GLM-OCR to extract all the text and feeding it to another model feels more practical.

Also, as I wrote in another comment, every Mistral model I've tried has never questioned me, which I certainly prefer.
amunozo 24 minutes ago
For certain tasks that are not hard but depend on a clear specification, a less capable model can actually be better: it forces you to write a better description of what you want, and you end up with better results. I will defend my PhD thesis soon, and I will buy a yearly Mistral subscription at the student price to get it for cheap.
tmikaeld 2 hours ago
My biggest issue with Devstral, and even with their biggest model, is that they're dangerous unless closely directed and reviewed, and I mean CLOSELY. Unfortunately, Mistral models will believe and do anything. See: https://petergpt.github.io/bullshit-benchmark/viewer/index.v... Some of the test results are horrifying.