mythz 8 hours ago

Really looked forward to this release as MiniMax M2.1 is currently my most used model thanks to it being fast, cheap and excellent at tool calling. Whilst I still use Antigravity + Claude for development, I reach for MiniMax first in my AI workflows, GLM for code tasks and Kimi K2.5 when deep English analysis is needed.

Not self-hosting yet, but I prefer using Chinese OSS models for AI workflows because of the potential to self-host in the future if needed. Also using it to power my openclaw assistant since IMO it has the best balance of speed, quality and cost:

> It costs just $1 to run the model continuously for an hour at 100 tokens/sec. At 50 tokens/sec, the cost drops to $0.30.
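A quick back-of-the-envelope check of what those quoted figures imply per million output tokens (just arithmetic on the numbers in the quote, not official pricing):

    # Implied per-million-token cost from the quoted figures (not official pricing).
    for cost_per_hour, tok_per_sec in [(1.00, 100), (0.30, 50)]:
        tokens_per_hour = tok_per_sec * 3600            # e.g. 100 tok/s -> 360,000 tokens/hour
        per_million = cost_per_hour / tokens_per_hour * 1_000_000
        print(f"{tok_per_sec} tok/s: ${cost_per_hour:.2f}/hour ~= ${per_million:.2f} per 1M tokens")

That works out to roughly $2.78 per million tokens at 100 tok/s and $1.67 per million at 50 tok/s, taking the quoted figures at face value.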

algo_trader 6 hours ago | parent | next [-]

> MiniMax first in my AI workflows, GLM for code tasks and Kimi K2.5

It's good to have these models to keep the frontier labs honest! Can I ask if you use the API or a monthly plan? Does the monthly plan throttle/reset?

edit: I agree that MM2.1 is the most economical, and K2.5 generally the strongest

mythz 6 hours ago | parent [-]

Using a coding plan; I haven't noticed any throttling and am very happy with the performance. They publish the quotas for each of their plans on their website [1]:

- $10/mo: 100 prompts / 5 hours

- $20/mo: 300 prompts / 5 hours

- $50/mo: 1000 prompts / 5 hours

[1] https://platform.minimax.io/docs/guides/pricing-coding-plan

miroljub 5 hours ago | parent [-]

They count one prompt as 15 requests, so the $10 plan's 100 prompts work out to 1,500 API requests per 5-hour window. Tokens are not counted.
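If that 15-requests-per-prompt figure holds across tiers, the per-window request budgets work out as below (a sketch combining the quotas listed above with the ratio from this comment):

    # Requests per 5-hour window, assuming 1 prompt = 15 API requests.
    REQUESTS_PER_PROMPT = 15
    plans = {"$10/mo": 100, "$20/mo": 300, "$50/mo": 1000}   # prompts per 5-hour window
    for plan, prompts in plans.items():
        print(f"{plan}: {prompts} prompts -> {prompts * REQUESTS_PER_PROMPT:,} requests / 5 hours")

That gives 1,500, 4,500 and 15,000 requests per window for the three tiers respectively.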

user2722 7 hours ago | parent | prev [-]

Incredibly cheap!

I'll have to look for it on OpenRouter.

amunozo 7 hours ago | parent [-]

For the moment it's free in Opencode, if you want to try it.
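For anyone who wants to try it on OpenRouter instead, a minimal sketch using OpenRouter's OpenAI-compatible endpoint; the model slug below is a guess, so check OpenRouter's model list for the current one:

    # Minimal sketch: calling MiniMax through OpenRouter's OpenAI-compatible API.
    import os
    from openai import OpenAI

    client = OpenAI(
        base_url="https://openrouter.ai/api/v1",
        api_key=os.environ["OPENROUTER_API_KEY"],
    )

    resp = client.chat.completions.create(
        model="minimax/minimax-m2.1",   # hypothetical slug; verify on openrouter.ai before use
        messages=[{"role": "user", "content": "Explain what this does: def add(a, b): return a + b"}],
    )
    print(resp.choices[0].message.content)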