Best AI coding plan alternative to Claude and ChatGPT
9 points by Jsttan 11 hours ago | 7 comments

With Claude's usage limits shrinking, I am thinking of jumping ship to a Chinese AI, since their benchmark results are already very close to Sonnet or Haiku 4.5, but at a fraction of the price. I am not worried about where my data ends up, though; I am focused on performance and usage limits. I mostly use it for coding and research.

However, I am currently deciding which one to use, and would love recommendations from anyone who is using any of these AI plans:

- GLM Coding Plan (Z AI): $18/month Lite Plan
- BytePlus: $10 ModelArk Coding Plan
- Kimi AI: $19/month Moderato Coding Plan
- MiniMax: $20 Plus Standard Plan

I would like to ask: is the performance good? Is it worth the money? And how are the usage limits? Also, if anyone has a good recommendation for an AI plan that is only available in Chinese, I don't mind, as I can read Chinese.

irthomasthomas 4 hours ago | parent | next [-]

I like Chutes. I think I get about 5K prompts per day for $20/month, though they may have stricter limits for new customers.

This gives you practically unlimited usage of frontier models like Kimi, DeepSeek, and GLM. Their models are always full-size, never quantised except where the lab itself provides a 4-bit or 8-bit model. You can see from the model config exactly which HF model it pulls and the serving configuration used.

Prompts are encrypted using a Trusted Execution Environment (TEE), so neither the model host nor a neighbour can view your prompts. That's as close as you can get to local-level privacy in the cloud.

JSR_FDED 10 hours ago | parent | prev | next [-]

I get Kimi through OpenCode Zen (kind of like OpenRouter for the OpenCode harness), periodically top up $20, and laugh every time I see my balance go down by 3 cents for something I would have happily paid someone $30 for.

serf 10 hours ago | parent | prev | next [-]

Nous portal or OpenRouter with a harness that does intelligent multi-provider routing, a local memory system, and pre-submission context compaction on input. If you do similar tasks often, your token usage will drop quite a bit after a while of using a memory subsystem like Hindsight or Honcho, and even more if you use your harness to build relevant skills for the repeated tasks.
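A minimal sketch of what pre-submission context compaction can look like. Everything here is illustrative: the function name and thresholds are made up, and real memory layers (like the ones mentioned above) replace the naive truncation with an LLM-written summary or retrieved memories.

```python
def compact(messages, keep_last=4, max_digest_chars=500):
    """Keep the most recent turns verbatim and collapse older ones
    into a single truncated digest message. Purely illustrative:
    real memory subsystems summarize with an LLM rather than truncate."""
    if len(messages) <= keep_last:
        return messages
    old, recent = messages[:-keep_last], messages[-keep_last:]
    digest = " | ".join(m["content"] for m in old)[:max_digest_chars]
    return [{"role": "system",
             "content": "Earlier context (compacted): " + digest}] + recent

# A 10-turn history shrinks to 1 digest message + the last 4 turns:
history = [{"role": "user", "content": f"turn {i}"} for i in range(10)]
print(len(compact(history)))  # 5
```

The token savings come from the fact that agent harnesses re-send the whole message list on every request, so shrinking old turns pays off on every subsequent call.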

fatbrowndog 11 hours ago | parent | prev | next [-]

Not good. I use DeepSeek's plan, Kimi AI, and OpenRouter, and they seemingly consume more tokens than Claude does.

On Claude Max x20, I consume roughly 30% of a week's limit per day. The equivalent workload on Kimi AI consumes 60% of its weekly limit in a single day.

On DeepSeek's latest model, at the 95% discount with caching, I was racking up ~$60/day before I stopped.

I don't know how Claude computes its daily limits, but it works out much cheaper.
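Per-day figures like that are easy to sanity-check with back-of-envelope per-token billing. The prices and usage numbers below are placeholders, not actual DeepSeek or Claude rates; the point is just the mechanics of cached vs. fresh input tokens.

```python
def daily_cost(requests, in_tokens, out_tokens, cache_hit_rate,
               price_in=0.28, price_in_cached=0.028, price_out=1.10):
    """Rough daily API spend. Prices are USD per million tokens and
    are hypothetical placeholders, not any provider's real rates."""
    cached = in_tokens * cache_hit_rate       # input served from cache
    fresh = in_tokens - cached                # input billed at full rate
    per_request = (fresh * price_in
                   + cached * price_in_cached
                   + out_tokens * price_out) / 1e6
    return requests * per_request

# e.g. 500 requests/day, 200k input tokens each (agent loops re-send
# context every turn), 5k output tokens, 80% of input cache-hit:
print(round(daily_cost(500, 200_000, 5_000, 0.8), 2))  # 10.59
```

Crank the cache-hit rate down or the context size up and the daily total climbs fast, which is how agentic coding on a big codebase can plausibly reach tens of dollars a day.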

Jsttan 10 hours ago | parent [-]

Which DeepSeek plan did you use? I've been trying to find a DeepSeek plan for a while with no success. I tried Claude's $20 plan before, and tokens burned like air; I find it hard to believe anything else would burn even faster.

fatbrowndog 2 hours ago | parent [-]

I'm using the deepseek-v4-pro model via OpenRouter. My bad, it's currently a 75% discount, not 95%.

I use the Claude Max x20 ($200) plan, and I manage to max it out in 2 weeks. Planning to move to multiple accounts, maybe.

I use Claude with C++ on a big codebase.

sidcool 9 hours ago | parent | prev [-]

Antigravity?