MiniMax M2.5 released: 80.2% on SWE-bench Verified (minimax.io)
134 points by denysvitali 4 hours ago | 38 comments
sinuhe69 3 hours ago
I hope better and cheaper models will become widely available, because competition is good for business. However, I'm more cautious about the benchmark claims. MiniMax 2.1 is decent, but one really can't call it smart. The more critical issue is that MiniMax 2 and 2.1 have a strong tendency toward reward hacking: they often write nonsensical test reports while the tests actually failed, and sometimes change the existing code base to make new code "pass" when they should have fixed their own code instead. Artificial Analysis puts MiniMax 2.1's Coding Index at 33, far behind frontier models, and I feel that's about right. [1]

jbellis 19 minutes ago
M2 was one of the most benchmaxxed models we've seen: a huge gap between its SWE-bench results and tasks it hadn't been trained on. We'll put 2.5 on the list. https://brokk.ai/power-ranking
simonw 2 hours ago
Pelican is recognizable but not great; the bicycle frame is missing a bar: https://gist.github.com/simonw/61b7953f29a0b7fee1f232f6d9826...

mythz 4 hours ago
Really looked forward to this release, as MiniMax M2.1 is currently my most used model thanks to it being fast, cheap, and excellent at tool calling. Whilst I still use Antigravity + Claude for development, I reach for MiniMax first in my AI workflows, GLM for code tasks, and Kimi K2.5 when deep English analysis is needed. Not self-hosting yet, but I prefer Chinese OSS models for AI workflows because of the potential to self-host in future if needed. Also using it to power my openclaw assistant, since IMO it has the best balance of speed, quality and cost:

> It costs just $1 to run the model continuously for an hour at 100 tokens/sec. At 50 tokens/sec, the cost drops to $0.30.

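The quoted claim is straightforward to sanity-check. A rough sketch, assuming the cost is dominated by output tokens and using the $2.4/M output price quoted downthread for M2.5-Lightning (the vendor's figures presumably account for input tokens and caching differently, so they won't match exactly):

```python
def usd_per_hour(tokens_per_sec: float, usd_per_million_tokens: float) -> float:
    """Cost of streaming output continuously for one hour at a given rate."""
    tokens_per_hour = tokens_per_sec * 3600
    return tokens_per_hour / 1_000_000 * usd_per_million_tokens

# At the M2.5-Lightning output price of $2.4/M tokens:
print(round(usd_per_hour(100, 2.4), 3))  # 0.864 -- roughly the "$1/hour" claim
print(round(usd_per_hour(50, 2.4), 3))   # 0.432
```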
logicprog 3 hours ago
Hm. The benchmarks look too good to be true, and a lot of what they say about how they trained this model sounds interesting, but it's hard to say how novel it actually is. Generally, I calibrate how much salt I take benchmarks with based on the objective properties of the model and my past experiences with models from the same lab.

For instance, I'm inclined to generally believe Kimi K2.5's benchmarks, because I've found their models tend to be extremely good qualitatively and feel genuinely well-rounded and intelligent instead of brittle and bench-maxed. I'm inclined to give GLM 5 some benefit of the doubt: while I think their past benchmarks have overstated their models' capabilities, I've also found their models relatively competent, and they 2X'd the size of their models, introduced a new architecture, and raised the number of active parameters, which makes me feel there's a possibility they could actually meet the benchmarks they're claiming.

Meanwhile, I've never found MiniMax remotely competent. It's always been extremely brittle, tended to screw up edits and misformat even simple JavaScript code, gotten into error loops, and quickly suffered context rot. And it's also simply too small, in my opinion, to see the kind of performance they're claiming.
mchusma an hour ago
This is cool. They mentioned affordability and said this is about $1/hour to run, which is about what I pay for Claude Code on the $200/mo plan. That comparison isn't literal: I sometimes run up to 3 concurrent sessions intermittently throughout the day, maybe 60 hours per week. So if a workload comes up that is literally continuous, this would be interesting, but I'm not sure about it right now. I'd be curious if anyone has anything they would literally run 24/7.
3adawi 3 hours ago
Wish my company allowed more of these LLMs through GitHub Copilot. We're stuck with the OpenAI, Anthropic, and Google LLMs, which burn through my credits one week into the month.
motbus3 37 minutes ago
Everyone is using this trick of grouping the plots weirdly instead of sorting them, to make them harder to compare. I see you, folks.
thedangler 2 hours ago
Wouldn't it be nice if we had language-specific LLMs that work on average computers? Like an LLM trained only on Python 3+, certain frameworks, and certain code repos. Then you could use a different model for searching the internet when implementing different things, to cut down on costs. Maybe I have no idea what I'm talking about lol

OsrsNeedsf2P 3 hours ago
> M2.5-Lightning [...] costs $0.3 per million input tokens and $2.4 per million output tokens. M2.5 [...] costs half that. Both model versions support caching.

Based on output price, the cost of M2.5 is one-tenth to one-twentieth that of Opus, Gemini 3 Pro, and GPT-5. Huge - if not groundbreaking - if the benchmark stats are true.

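Taking the quoted prices at face value, the implied M2.5 rates and a sample per-request cost work out as follows (a sketch only; cache discounts are ignored and the example token counts are hypothetical):

```python
# Quoted M2.5-Lightning prices in USD per million tokens; M2.5 "costs half that".
LIGHTNING = {"input": 0.3, "output": 2.4}
M25 = {tier: price / 2 for tier, price in LIGHTNING.items()}  # input 0.15, output 1.2

def request_cost(prices: dict, input_tokens: int, output_tokens: int) -> float:
    """Cost of a single uncached request under the given price table."""
    return (input_tokens * prices["input"] + output_tokens * prices["output"]) / 1e6

# Hypothetical coding-agent turn: 20k prompt tokens, 2k completion tokens on M2.5.
print(round(request_cost(M25, 20_000, 2_000), 4))  # 0.0054
```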
denysvitali 3 hours ago
Btw, the model is free on OpenCode for now.
tgrowazay 15 minutes ago
$1/hr sounds suspiciously close to the price of one A100 80GB GPU. Maybe an 8x node, assuming batching >= 8 users per node.
aliljet an hour ago
I wonder if these are starting to get reasonable enough to use locally?
rbren 2 hours ago
A reasonably sized OSS model that's this good at coding is a HUGE step forward. We've done some vibe checks on it with OpenHands, and it indeed performs roughly as well as Sonnet 4.5. OSS models are catching up.
jhack 4 hours ago
And it's available on their coding plans, even the cheapest one.
turnsout 4 hours ago
With the GLM news yesterday and now this, I'd love to try one of these models, but I'm pretty tied to my Claude Code workflow. I see there's a workaround for GLM, but how are people using MiniMax, especially for coding?
