MiniMax M2.1: Built for Real-World Complex Tasks, Multi-Language Programming(minimaxi.com)
124 points by 110 8 hours ago | 37 comments
kachapopopow 21 minutes ago | parent | next [-]

I think people should stop comparing to Sonnet and compare to Opus instead, since it's so far ahead at producing code I would actually want to use (Gemini 3 Pro tends to be lacking in generalization and wants things to use its own style rather than adapting).

Whatever benchmark Opus is ahead in should be treated as a very important metric of proper generalization in models.

viraptor 6 hours ago | parent | prev | next [-]

I've played with this a bit and it's OK. I'd place it somewhere around Sonnet 4.5 level, probably below. But with this aggressive pricing you can just run 3 copies on the same task, choose the one that succeeded, and still come out way ahead on cost. Not as good at following instructions as Claude models, and it can get lost, but still "good enough".

I'm very happy using it to just "do things". When in-depth debugging or a massive plan is needed, I'd go with something better, but for later going through the motions? It works.

gcanyon 4 hours ago | parent | prev | next [-]

Would it kill them to use the words "AI coding agent" somewhere prominent?

"MiniMax M2.1: Significantly Enhanced Multi-Language Programming, Built for Real-World Complex Tasks" could be an IDE, a UI framework, a performance library, or, or...

spoaceman7777 4 hours ago | parent | next [-]

It's not an AI coding agent. It's an LLM that can be used for whatever you'd like, including powering coding agents.

pdyc 2 hours ago | parent | next [-]

That reinforces OP's point that it isn't clear from their wording. I initially thought it was a speech model; then I saw Python, etc., and it took a bit more reading to understand what it actually is.

gcanyon 3 hours ago | parent | prev [-]

HA! I almost added a disclaimer to the original message that I wasn't certain in my identification, hence the request/complaint that they didn't make it clear. But I figured the message would be more effective if I "confidently got it wrong" rather than asking, so I went with it.

tw1984 3 hours ago | parent | prev [-]

Its main Chinese competitor GLM has made something like 50 cents USD each over the past 6 months from its 40 million "developer users"; calling your flagship model an "AI coding agent" is like telling investors "we are doing this for fun, not for money".

integricho an hour ago | parent | prev | next [-]

Their site crashes my phone browser while scrolling. Is that the expected quality of output of their product?

Tepix 37 minutes ago | parent [-]

Should a website be able to crash a browser?

jondwillis 7 hours ago | parent | prev | next [-]

> MiniMax has been continuously transforming itself in a more AI-native way. The core driving forces of this process are models, Agent scaffolding, and organization. Throughout the exploration process, we have gained increasingly deeper understanding of these three aspects. Today we are releasing updates to the model component, namely MiniMax M2.1, hoping to help more enterprises and individuals find more AI-native ways of working (and living) sooner.

This compresses to: “We are updating our model, MiniMax, to 2.1. Agent harnesses exist and Agents are getting more capable.”

A good model and agent harness, pointed at the task of writing this post, might suggest less verbosity and complexity; it comes off as fake and hype-chasing to me, even if your model is actually good. I disengage there.

I saw y'all give a lightning talk recently and it was similarly hype-y. Perhaps this is a translation or cultural thing.

tw1984 6 hours ago | parent | next [-]

So when MiniMax released a pretty capable model, you chose to ignore the model itself, focused on a single sentence they wrote in the release note, and started bad-mouthing it.

Is that a cultural thing?

simlevesque 5 hours ago | parent [-]

If I use software, I need to trust it.

tw1984 4 hours ago | parent [-]

A model is not software; it's a bunch of weights.

You're more than welcome to pick whatever model or software you choose to trust; that's totally fine. However, that's vastly different from bad-mouthing a model or software just because its release note contains a single sentence you don't like.

LoganDark 3 hours ago | parent [-]

The API is software. You don't get the weights.

zaptrem 6 hours ago | parent | prev [-]

Not sure it’s a cultural thing since most of the copy coming out of DeepSeek has been pretty straightforward.

tomcam 6 hours ago | parent | prev | next [-]

I still can’t figure out what it does

yinuoli 5 hours ago | parent | next [-]

It's a neural network model; it generates text that continues a given text.
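That one-line description, "generates text that continues a given text", is literally next-token prediction in a loop. A toy illustration using bigram counts in place of a neural network (the corpus here is made up; a real LLM predicts over subword tokens with a learned model instead of a lookup table):

```python
from collections import Counter, defaultdict

# Tiny made-up training corpus.
corpus = "the model writes code and the model writes tests".split()

# Count, for each word, which words follow it (a bigram "model").
follows: dict[str, Counter] = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    follows[prev][nxt] += 1


def continue_text(prompt: str, n_tokens: int = 3) -> str:
    """Greedily append the most frequent next word, n_tokens times.

    An LLM runs the same generate-one-token-and-append loop, just
    with a neural network producing the next-token distribution.
    """
    tokens = prompt.split()
    for _ in range(n_tokens):
        candidates = follows.get(tokens[-1])
        if not candidates:
            break  # nothing ever followed this word in training
        tokens.append(candidates.most_common(1)[0][0])
    return " ".join(tokens)


print(continue_text("the model"))
```

Everything these chat products do, coding included, is built on that continuation loop plus training data and scaffolding.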

esafak 6 hours ago | parent | prev | next [-]

It's an LLM for coding.

prmph 6 hours ago | parent | prev [-]

You are not alone

esafak 6 hours ago | parent | prev | next [-]

> It exhibits consistent and stable results in tools such as Claude Code, Droid (Factory AI), Cline, Kilo Code, Roo Code, and BlackBox, while providing reliable support for Context Management mechanisms including Skill.md, Claude.md/agent.md/cursorrule, and Slash Commands.

One of the demos shows them using Claude Code, which is interesting. And the next sections are titled 'Digital Employee' and 'End-to-End Office Automation'. Their ambitions obviously go beyond coding. A sign of things to come...
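For context on the Claude Code demo: pointing Claude Code at a third-party Anthropic-compatible endpoint is typically done via environment variables. A hedged sketch; the MiniMax base URL below is an assumption, so check their docs for the actual endpoint:

```shell
# Route Claude Code to an Anthropic-compatible API.
# The base URL is an assumption for illustration, not confirmed here.
export ANTHROPIC_BASE_URL="https://api.minimax.io/anthropic"  # assumed endpoint
export ANTHROPIC_AUTH_TOKEN="<your MiniMax API key>"
claude  # Claude Code now sends requests to the configured endpoint
```

This is why "works in Claude Code / Cline / Roo Code" is a meaningful claim: those harnesses expect Anthropic-style tool-calling behavior from whatever model sits behind the endpoint.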

jimmydoe 5 hours ago | parent [-]

They're going IPO on HKEX in a few weeks. Some hype is necessary; not too far-fetched imo, pretty much the same as the Anthropic playbook.

tw1984 3 hours ago | parent [-]

The Anthropic playbook does include the false claim publicly made by its CEO that "in six months AI would be writing 90 percent of code". He made that claim 10 months ago. Intentionally misleading investors is a criminal offence in many countries.

MiniMax is like 100x more honest.

sosodev 4 hours ago | parent | prev | next [-]

I’ve spent a little bit of time testing Minimax M2. It’s quite good given the small size but it did make some odd mistakes and struggle with precise instructions.

viraptor 32 minutes ago | parent [-]

This is an announcement for M2.1, not M2. It got a decent bump in agent capabilities.

mr_o47 6 hours ago | parent | prev | next [-]

I won't say it's on the same level as Claude models, but it's definitely good at coming up with frontend designs.

jdright 7 hours ago | parent | prev | next [-]

https://www.minimax.io/news/minimax-m21

Invictus0 5 hours ago | parent | prev | next [-]

How is everyone monitoring the skill/utility of all these different models? I am overwhelmed by how many there are, and by the challenge of tracking their capability across so many different modalities.

redman25 4 hours ago | parent | next [-]

https://www.swebench.com

https://swe-rebench.com

https://livebench.ai/#/

https://eqbench.com/#

https://contextarena.ai/?needles=8

https://metr.org/blog/2025-03-19-measuring-ai-ability-to-com...

https://artificialanalysis.ai/leaderboards/models

https://gorilla.cs.berkeley.edu/leaderboard.html

https://github.com/lechmazur/confabulations

https://dubesor.de/benchtable

https://help.kagi.com/kagi/ai/llm-benchmark.html

https://huggingface.co/spaces/DontPlanToEnd/UGI-Leaderboard

spoaceman7777 4 hours ago | parent | prev [-]

This is the best summary, in my opinion. You can also see the individual scores on the benchmarks they use to compute their overall scores.

It's nice and simple in the overview mode, though. It breaks results down into an intelligence ranking, a coding ranking, and an agentic ranking.

https://artificialanalysis.ai/

boredemployee 4 hours ago | parent | prev | next [-]

Internal Server Error

p-e-w 7 hours ago | parent | prev | next [-]

One of the cited reviews goes:

“We're excited for powerful open-source models like M2.1 […]”

Yet as far as I can tell, this model isn’t open at all. Not even open weights, nevermind open source.

NitpickLawyer 14 minutes ago | parent | next [-]

Repo made public a few minutes ago:

https://huggingface.co/MiniMaxAI/MiniMax-M2.1
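With the repo public, the usual way to fetch the weights is the Hugging Face CLI (repo id taken from the link above; expect a very large download for a model of this class):

```shell
# Download the published weights from Hugging Face.
pip install -U "huggingface_hub[cli]"
huggingface-cli download MiniMaxAI/MiniMax-M2.1 --local-dir MiniMax-M2.1
```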

viraptor 6 hours ago | parent | prev | next [-]

It's scheduled for release. They jumped the gun with the news. But as far as we know, it's still coming out, just like M2.

p-e-w 6 hours ago | parent [-]

I don’t get it. What’s the holdup? Uploading a model to Hugging Face isn’t exactly difficult.

bearjaws 7 hours ago | parent | prev [-]

Yeah, I don't see any way to download this; Ollama has it as cloud-only.

monster_truck 7 hours ago | parent | prev [-]

That they are still training models against Objective-C is all the proof you need that it will outlive Swift.

When is someone going to vibe-code Objective-C 3.0? Borrowing all of the actually good things that have happened since 2.0 is closer than you'd think, thanks to LLVM and friends.

viraptor 6 hours ago | parent [-]

Why wouldn't they? Existing Objective-C apps will still need updates and various other work. Models are also still trained on assembler for architectures that don't meaningfully exist today.