raffael_de 5 days ago

Every time I tried a Mistral model I was left rather underwhelmed and just went back to the usual options. Seems like their only USP at this point is Made in EU.

simion314 5 days ago | parent | next [-]

>Seems like their only USP at this point is Made in EU.

They are also releasing model weights for most of their models, while companies like Antropic and, until recently, OpenAI were FUDing the world that open source will doom us all.

Mistral's smartest model is still behind Google and Antropic, but they will catch up.

swores 5 days ago | parent [-]

Not a big deal, but FYI there's an 'h' in the company name "Anthropic".

Inspired by the Greek word for human: Anthropos / ἄνθρωπος, the same etymology as English words like anthropology, the study of humans.

(I'd hazard a guess that your first language is something like a Romance language such as French, where people would pronounce that "anthro..." as if there is no h? So a particularly reasonable letter to forget when typing!)

lbreakjai 5 days ago | parent | next [-]

We generally kept traces of the original Latin/Greek in the French spelling. It's "anthropologie" in French, but "antropología" in Spanish, and "antropologia" in Italian and Portuguese.

Which makes French particularly hard to write, compared to the other Romance languages.

swores 5 days ago | parent [-]

Interesting! French is the only one I'm familiar with, and I'd just assumed it was representative of the others. Thanks for the extra context.

simion314 5 days ago | parent | prev [-]

yes, my first language is Romanian, a Romance language, and add to that my complete disrespect for the company's anti-open-source FUD, so I never waste my time double-checking my spelling.

epolanski 5 days ago | parent | prev | next [-]

Not my experience, and I have compared OpenAI/Anthropic/Mistral quite a bit.

Speed and cost are relevant factors. I have pipelines that need to execute tons of completions and produce summaries. Mistral Small is great at that, and the responses are lightning fast.

For that use case, going with US models would be far more expensive and slower while offering no benefit at all.
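To make the cost/speed trade-off concrete, here is a back-of-the-envelope sketch. All numbers (batch size, token counts, per-million-token prices, throughput, concurrency) are hypothetical placeholders, not real Mistral or OpenAI pricing; only the arithmetic is the point:

```python
def batch_cost_usd(n_completions, avg_in_tokens, avg_out_tokens,
                   in_price_per_mtok, out_price_per_mtok):
    """Total price for a batch of completions, given $/million-token rates."""
    total_in = n_completions * avg_in_tokens
    total_out = n_completions * avg_out_tokens
    return (total_in * in_price_per_mtok + total_out * out_price_per_mtok) / 1_000_000

def batch_wall_time_s(n_completions, avg_out_tokens, tokens_per_second, concurrency=1):
    """Rough wall-clock time, assuming output generation dominates and
    requests parallelize perfectly across `concurrency` streams."""
    return n_completions * avg_out_tokens / tokens_per_second / concurrency

# Hypothetical summarization batch: 100k docs, ~2k tokens in, ~300 tokens out.
cost = batch_cost_usd(100_000, 2_000, 300, 0.10, 0.30)            # assumed $0.10/$0.30 per Mtok
hours = batch_wall_time_s(100_000, 300, 1_000, concurrency=32) / 3600
print(f"~${cost:.2f}, ~{hours:.2f} h")
```

At these made-up rates, a model that costs a few times more per token or runs at a fraction of the throughput multiplies both figures directly, which is why cheap, fast small models win for bulk summarization.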

tormeh 5 days ago | parent | prev | next [-]

If you give them money, Mistral will help you set their models up in your basement. Also, they're really cheap. That's their USP, I think.

greyb 5 days ago | parent [-]

I'm curious how relevant this actually is as a USP with the proliferation of open weight models and a glut of technical consultants.

baq 5 days ago | parent | prev [-]

they're also fast.

raffael_de 5 days ago | parent [-]

So are Gemini Flash (Lite) and GPT mini/nano.

threeducks 5 days ago | parent [-]

    - 1100    tokens/second Mistral Flash Answers https://www.youtube.com/watch?v=CC_F2umJH58
    -  189.9  tokens/second Gemini 2.5 Flash Lite https://openrouter.ai/google/gemini-2.5-flash-lite
    -   45.92 tokens/second GPT-5 Nano https://openrouter.ai/openai/gpt-5-nano
    - 1799    tokens/second gpt-oss-120b (via Cerebras) https://openrouter.ai/openai/gpt-oss-120b
    -  666.8  tokens/second Qwen3 235B A22B Thinking 2507 (via Cerebras) https://openrouter.ai/qwen/qwen3-235b-a22b-thinking-2507
Gemini 2.5 Flash Lite and GPT-5 Nano seem to be comparatively slow.

That being said, I cannot find non-marketing numbers for Mistral Flash Answers. Real-world tokens/second are likely lower, so this comparison is not very fair.
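If you'd rather measure tokens/second yourself than trust marketing or aggregator figures, a minimal sketch is to time a streamed response. The helper below is provider-agnostic: feed it any iterator of text chunks (e.g. the deltas from an OpenAI-compatible streaming client, which is an assumption about your setup, not shown here). By default it counts characters as a crude token proxy; pass a real tokenizer's count function for honest numbers:

```python
import time

def measure_tps(chunk_iter, count_tokens=len):
    """Consume a stream of text chunks; return (tokens, seconds, tokens/sec).

    The clock starts at the first chunk, so connection setup and
    time-to-first-token are excluded -- this measures generation speed only.
    `count_tokens` maps one chunk to a token count (default: character count,
    a rough proxy; substitute a proper tokenizer for real measurements).
    """
    tokens = 0
    start = None
    for chunk in chunk_iter:
        if start is None:
            start = time.perf_counter()
        tokens += count_tokens(chunk)
    elapsed = time.perf_counter() - start if start is not None else 0.0
    return tokens, elapsed, tokens / elapsed if elapsed > 0 else 0.0
```

Run it several times against each provider at your actual prompt lengths; single-run numbers (and single marketing demos) vary a lot with load and context size.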