skapadia 5 hours ago

Claude Opus 4.5 is by far the most capable development model. I've been using it mainly via Claude Code, and with Cursor.

I agree anticompetitive behavior is bad, but the productivity gains to be had by using Anthropic models and tools are undeniable.

Eventually the open tools and models will catch up, so I'm all for using them locally as well, especially if sensitive data or IP is involved.

vercaemert 5 hours ago | parent | next [-]

I'd encourage you to try the -codex family of models at the highest reasoning setting.

I can't comment on Opus in CC because I've never bitten the bullet and paid for the subscription, but I have worked my way up to the $200/month Cursor subscription, and in my experience the 5.2 codex models blow Opus out of the water (obviously very subjective).

I've arrived at making plans with Opus and then implementing with the OpenAI model; Opus's speed makes it much better suited to planning.

I'm willing to believe that CC/Opus is truly the overall best; I'm only commenting because you mentioned Cursor, where I'm fairly confident it's not. I'm basing my judgement on "how frequently does it do what I want the first time".

skapadia 4 hours ago | parent | next [-]

Thanks, I'll try those out. I've used Codex CLI itself on a few small projects as well, and fired it up on a feature branch where I had it implement the same feature that Claude Code did (they didn't see each other's implementations). For that specific case, the implementation Codex produced was simpler, and better for the immediate requirements. However, Claude's more abstracted solution may have held up better to changing requirements. Codex feels more reserved than Claude Code, which can be good or bad depending on the task.

eadwu 5 hours ago | parent | prev [-]

I've tried nearly all the models, and they all work best if and only if you'll never touch the code again. They suck if you already have a solution and want them to implement that solution.

I've tried explaining the implementation word for word, and it still prefers to create a whole new implementation, reimplementing some parts, instead of just doing what I tell it to. The only time it works is when I actually give it the code, but at that point there's no reason to use it.

There would be nothing wrong with this approach if it actually came with guarantees, but current models are an extremely bad fit for it.

vercaemert 5 hours ago | parent | next [-]

Yes, I only use the plan/implement flow on fully AI projects, where it's easy for me to tell whether they're doing the thing I want, regardless of whether they've rewritten the codebase.

For actual work that I bill for, I go in with instructions to make minimal changes, and then I carefully review/edit everything.

That being said, the "toy" fully-AI projects I work with have evolved to the point where I regularly accomplish things I never (never ever) would have without the models.

teaearlgraycold 5 hours ago | parent | prev [-]

There are domains of programming (web front end, say) where lots of requests get handled pretty well even when you want them done a certain way. Not all, but enough to make it a great tool.

Uehreka 4 hours ago | parent | prev | next [-]

> Claude Opus 4.5 by far is the most capable development model.

At the moment I have a personal Claude Max subscription and ChatGPT Enterprise for Codex at work. Using both, I feel pretty definitively that gpt-5.2-codex is strictly superior to Opus 4.5. When I use Opus 4.5 I'm still constantly dealing with it cutting corners, misinterpreting my intentions, and stopping when it isn't actually done. When I switched to Codex for work a few months ago, all of those problems went away.

I got the personal subscription this month to try out Gas Town and see how Opus 4.5 does on various tasks, and there are definitely features of CC that I miss with Codex CLI (I can’t believe they still don’t have hooks), but I’ve cancelled the subscription and won’t renew it at the end of this month unless they drop a model that really brings them up to where gpt-5.2-codex is at.

Der_Einzige 4 hours ago | parent [-]

I have literally the opposite experience, and so does most of AI-pilled Twitter and the AI research community at the top conferences (NeurIPS, ICLR, ICML, AAAI). Why does this FUD keep appearing on this site?

Edit: It's very true that the big 4 labs silently mess with their models, and any action of that nature is extremely user-hostile.

CamperBob2 3 hours ago | parent [-]

Probably because all of the major providers are constantly screwing around with their models, regardless of what they say.

skippyboxedhero 4 hours ago | parent | prev [-]

It feels very close to a trade-off point.

I agree with all the posts in this chain: Opus is good, Anthropic have burned goodwill, and I would like to use other models...but Opus is too good.

What I find most frustrating is that I am not sure it is even actual model quality that is the blocker with other models. Gemini sometimes just goes off the rails with strange bugs, like writing random text continuously and burning output tokens; Grok seems to have system prompts that result in odd behaviour...no bugs, just doing weird things; Gemini Flash models seem to output massive quantities of text for no reason...the failures often feel like very stupid things.

Also, there are huge IP issues with adopting some of these open models: third parties are running them, and you are just sending them all your code...with nothing but a code-of-conduct promise from OpenRouter?

I also don't think there needs to be a huge improvement in models. Opus feels somewhat close to the reasonable limit: useful, still outputs nonsense, misses things sometimes...there are open models that can reach the same 95th percentile, but their median is the model outputting complete nonsense and trying to wipe your file system.

The day for open models will come, but right now it feels so close and yet so far.