lemming 4 hours ago

Obviously Anthropic are within their rights to do this, but I don’t think their moat is as big as they think it is. I’ve cancelled my Max subscription and have gone over to ChatGPT Pro, which now explicitly supports this use case.

manquer 4 hours ago | parent | next [-]

Is OpenCode that much better than Codex / Claude Code for CLI tooling that people are prepared to forsake[1] Sonnet 4.5/Opus 4.5 and switch to GPT 5.2-codex?

The moat is Sonnet/Opus, not Claude Code; the moat can never be a client-side app.

Cost arbitrage like this is short-lived; it only lasts until the org changes its pricing.

For example, Anthropic could release, say, an Ultra plan at $500-$1,000 with these restrictions removed or relaxed, reflecting the true cost of the consumption. Or they could get the cost of inference down far enough that even at $200 it is profitable for them, and then stop caring if the higher bracket doesn't sell well; $200 becomes what the market is ready to pay, and some percentage of users will use it more than the rest, as is the case with any software.

Either way, the only money here, i.e. the $200 (or more), is going to Anthropic.

[1] Perceived or real, there is a huge gulf in how Sonnet 4.5 is seen versus GPT 5.2-codex.

lemming 43 minutes ago | parent | next [-]

I’ve used both Claude and Codex extensively, and I already preferred Codex the model. I didn’t like the harness, but recently pi got good enough to be my daily driver, and I’ve since found that it’s much better than either CC or Codex CLI. It’s OSS, very simple and hackable, and the extension system is really nice. I wouldn’t want to go back to Claude Code even if I were convinced the model were much better; given that I already preferred the alternative, it’s a no-brainer. OpenAI have officially allowed the use of pi with their sub, so at least in the short term the risk of a rug pull seems minimal.

mixto 30 minutes ago | parent [-]

What is pi?

threecheese 3 hours ago | parent | prev [-]

The combination of Claude Code and the models could be a moat of its own; they are able to use RL to make their agent better: tool descriptions, reasoning patterns, etc.

Are they doing it? No idea; it sounds ridiculously expensive. But they did buy Bun, maybe to facilitate integrating around CC. Cowork, as an example, uses CC almost as an infrastructure layer, and the Claude Agent SDK is basically LiteLLM for your Max subscription, also built on/wrapping the CC app. So who knows, the juice may be worth the RL squeeze if CC is going to be foundational to some enterprise strategy.

Also IMO OpenCode is not better, just different. I’m getting great results with CC, but if I want to use other models like GLM/Qwen (or the new Nvidia stuff) it’s my tool of choice. I am really surprised to see people cancelling their Max subscriptions; it looks performative and I suspect many are not being honest.

manquer 2 hours ago | parent [-]

Why would they not be able to use RL to learn just because it's OpenCode instead of Claude Code?

The tool calls, reasoning, etc. are still sent, tracked, and used by Anthropic; the model cannot function well without that kind of detail.

OpenCode also gets more data it could train its own model with. However, at this point only a few companies can attempt foundation model training runs, so I don't think the likes of Anthropic are worried about a small player also getting their user data.

---

> it looks performative and I suspect many are not being honest.

Quite possible, if they were leveraging the cost arbitrage, i.e. the fact that the actual per-token cost was cheaper because of this loophole. Now that their cost is higher, they perhaps don't need/want/value the quality for the price paid, so they will go to Kimi K2 / Grok Code / GLM Air for better pricing. Basically, if all you value is cost per token, this change is reason enough to switch.

These are the kind of users Anthropic perhaps doesn't want, somewhat akin to Apple segmenting the market and not chasing the budget end.

CGamesPlay an hour ago | parent | prev | next [-]

Honestly, I'm a big Claude Code fan, despite how bad its CLI application is, because the model was so much better than other models. Anthropic's move here pretty much signals to me that the model isn't much better anymore, and that the others are due for a second chance.

If their model were truly ahead of the game, they wouldn't lock down the subsidized API in the same week they ask for 5-year retention on my prompts and permission to use them for training. Instead, they would be focusing on delivering the model more cheaply and broadly, regardless of which client I use to access it.

kroaton 3 hours ago | parent | prev [-]

I hope the upcoming DeepSeek coding model puts a dent in Anthropic’s armor. Claude 4.5 is by far the best/fastest coding model, but the company is just too slimy and is burning enough $$$ to guarantee enshittification in the near future.

Tostino 2 hours ago | parent [-]

I get way better results from Gemini fwiw.