| ▲ | lepuski 13 hours ago |
| I can't see why anyone still chooses Claude. Codex outperforms it in most respects, and its quotas are about ten times larger. A $100 Codex plan gets me through the whole week with 6–12 hours of coding per day. |
|
| ▲ | jjice 12 hours ago | parent | next [-] |
| I've found GPT 5.5 pretty solid, but Opus keeps impressing me. It's tracked down some insane stuff while I was away in a meeting. 5.5 is way closer to Anthropic than previous OpenAI models were, IMO. These comparisons are so tricky because everyone seems to have a conflicting experience. Part of the fun, I guess! |
|
| ▲ | SatvikBeri 12 hours ago | parent | prev | next [-] |
| I've never actually run into the issues that people talk about online, like Claude suddenly getting dumb or running out of usage. So there's just not a lot of incentive for me to shop around. I've used Amp a bit, and it's quite nice, but a bit more expensive without the subsidized subscription. |
| |
| ▲ | gardnr 12 hours ago | parent | next [-] |
| Are you using Opus? Sonnet remains as useful as it was, while Opus's efficacy has declined and its token burn rate has climbed over the last 4 months. |
|
| ▲ | fny 12 hours ago | parent | next [-] |
| I'm using Opus on xhigh 10+ hours a day, and I've only reached 80% of the weekly limit when doing massive ports or refactors. I haven't once hit the hourly limits, and I've used Claude very, very aggressively. I guess it's a pain point for power users. |
|
| ▲ | josephg 8 hours ago | parent [-] |
| I sometimes run multiple Claudes at the same time, with each terminal working on a different task. I have 2 going right now. It's very easy to burn through your quota if you work like that, especially on high/xhigh. |
|
| ▲ | plufz 6 hours ago | parent [-] |
| I used to run mostly at high/xhigh, but now at medium I think it actually performs quite well, on both results and token usage. |
|
| ▲ | SatvikBeri 8 hours ago | parent | prev [-] |
| Yes, I've pretty much used Opus exclusively for the last year, except for a brief period when Sonnet was ahead. |
|
| ▲ | raincole 12 hours ago | parent | prev | next [-] |
| It has always been like this. We actually know that model performance has been mostly steady[0], but you can't beat the notion of "evil companies secretly serving us worse models." The meme value is too strong. [0]: https://marginlab.ai/trackers/claude-code/ |
|
| ▲ | mnicky 4 hours ago | parent [-] |
| Hmm, today's pass rate rose to 73% - interesting. Are they A/B-testing some new model? This is too high for Opus 4.7. |
|
| ▲ | mbreese 12 hours ago | parent | prev | next [-] |
| When do you use it the most? I've noticed that it most often starts to degrade between 10 and 5 US East Coast time. Late at night I have the fewest issues, but without fail, if I'm trying to do anything complex during the day, Claude gets loopy. |
|
| ▲ | dboreham 12 hours ago | parent | prev [-] |
| Same here. Works every time. Never ran into usage limits either. |
|
|
| ▲ | elahieh 13 hours ago | parent | prev | next [-] |
| One reason might be that Claude Opus 4.7 (thinking) benchmarks better on Arena Coding at https://arena.ai/leaderboard/text/coding ... hopefully that's an effective assessment of correctness. It doesn't account for reliability, though. |
|
| ▲ | hansvm 12 hours ago | parent | prev | next [-] |
| Claude is the only AI coding tool I've found worth a damn. Without it I'd just do everything by hand, save for a few bash scripts or whatever. |
|
| ▲ | xboxnolifes 11 hours ago | parent | prev | next [-] |
| I certainly get more usage before cutoff from GPT 5.5, but the output I get from Opus 4.7 is way better. It just sucks that I only get 2 good "long-running" prompts out of Opus 4.7 before hitting the daily quota on the $20 subscription. |
|
| ▲ | Thaxll 13 hours ago | parent | prev | next [-] |
| I think it's impossible to say that Codex x.y.z is better than Sonnet x.y.z. I've used many high-end models, and they're all just good. |
|
| ▲ | kylemaxwell 12 hours ago | parent | prev | next [-] |
| Corporate policies and agreements. In large corporations, using external non-approved models with proprietary source code is a good way to have significant career issues. |
|
| ▲ | SeanAnderson 12 hours ago | parent | prev | next [-] |
| You get a discount for paying for a full year on Teams, and Enterprise can involve contractual obligations. It's a lot of effort to get buy-in to change providers and to shift an entire organization. The winds change frequently in this space, and the pain needs to reach a certain level before it's worth rolling the dice. |
|
| ▲ | taspeotis 12 hours ago | parent | prev | next [-] |
| Claude Max 20x gives me unlimited (for my level of usage) Opus 4.7 - how much money do I have to pay OpenAI for that? |
| |
| ▲ | arcanemachiner 12 hours ago | parent [-] |
| Based on the experience of people on the $20 Claude Pro subscription exhausting their quotas in a matter of minutes, the answer to your question is probably "less". (I would guess that the $100 plan would do the trick.) |
|
|
| ▲ | atraac 5 hours ago | parent | prev | next [-] |
| But a $100 Claude subscription also easily gets me an entire week of coding 6-8 hours a day? What on earth do you do to run out of limits on Max? Do you vibe-code multiple new codebases every day for a living? Another benefit of Claude is that it doesn't gaslight me every time I tell it it's wrong. |
|
| ▲ | CompoundEyes 12 hours ago | parent | prev | next [-] |
| In my org, the teams doing agent engineering at scale are all on Codex using gpt-5.5. By scale I mean fully agent-authored code workflows with long-running, multi-hour plans. |
|
| ▲ | etchalon 12 hours ago | parent | prev | next [-] |
| I'd rather not give money to Sam Altman. |
| |
| ▲ | beering 8 hours ago | parent [-] |
| With Anthropic you're giving money to Elon Musk. Seems like a pick-your-billionaire world we're in now. |
|
|
| ▲ | wahnfrieden 8 hours ago | parent | prev | next [-] |
| Claude is (per benchmarks) much worse at instruction following, but it's more charming, more deceptive, and anthropomorphized by default (in name and image), which leads to productivity-assessment psychosis. |
|
| ▲ | squirrellous 12 hours ago | parent | prev | next [-] |
| Corporate reasons. AWS hasn't opened Codex models to everyone yet. |
|
| ▲ | echelon 13 hours ago | parent | prev | next [-] |
| Claude is significantly better at Rust in my experience, and Rust is my favorite language to emit from LLMs. Opus 4.7 + Rust is a killer combo. |
|
| ▲ | yieldcrv 12 hours ago | parent | prev | next [-] |
| Because my shard isn't erroring. I use Codex when Claude Code is down, and I only began using Claude when ChatGPT was down. Yes, Codex is very fast, but I'm going back to Claude for now. |
|
| ▲ | nothinkjustai 13 hours ago | parent | prev [-] |
| Because of marketing and vibes mostly. Heck, I prefer DeepSeek to both of those. |
| |
| ▲ | mcv 3 hours ago | parent | next [-] |
| I feel you. I'd prefer to stick entirely with local open-source models. I tried using Aider and Qwen last week, and while it's still impressive what it can do with just local resources, entirely for free, its error rate is too high, and it's clearly not remotely in the same league as Claude Code. |
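|
| A minimal sketch of that kind of local setup, using aider's Python scripting API against an Ollama-served Qwen model. The model tag, server address, and file name below are illustrative assumptions, not details taken from the comment: |

```python
# Sketch: drive a locally served Qwen model through aider's scripting API.
# Assumes Ollama is running locally; the model tag and target file are
# placeholders -- substitute whatever you actually pulled and want edited.
import os

from aider.coders import Coder
from aider.models import Model

# Point aider's Ollama backend at the local server (default Ollama port).
os.environ["OLLAMA_API_BASE"] = "http://127.0.0.1:11434"

model = Model("ollama_chat/qwen2.5-coder:32b")  # hypothetical local model tag
coder = Coder.create(main_model=model, fnames=["app.py"])  # files in the chat

# Each run() call executes one instruction and applies edits to the files.
coder.run("add basic error handling to the file-loading code")
```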
|
| ▲ | josephg 12 hours ago | parent | prev | next [-] |
| Wow, I'm really surprised. I tried DeepSeek (their best model, through the official API). It's extremely cheap, but it's clearly not as good at programming as Opus 4.7. It seems nowhere near as good at making high-level design choices. DeepSeek also seems to get stuck in whack-a-mole fixing loops much more than Opus; I stopped it at one point, asked Opus to solve the problem it was working on, and it saw the solution immediately. I was running DeepSeek through Claude's code agent harness. Maybe it works better through a different tool? |
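|
| For reference, a minimal sketch of that harness-style setup, assuming DeepSeek's Anthropic-compatible endpoint; the base URL and model id below are assumptions to verify against DeepSeek's docs, not details from the comment: |

```python
# Sketch: point the standard anthropic SDK at DeepSeek's (assumed)
# Anthropic-compatible endpoint -- the same mechanism that lets
# Anthropic-style tooling talk to a different backend.
import anthropic

client = anthropic.Anthropic(
    base_url="https://api.deepseek.com/anthropic",  # assumed compat endpoint
    api_key="YOUR_DEEPSEEK_API_KEY",  # a DeepSeek key, not an Anthropic one
)

message = client.messages.create(
    model="deepseek-chat",  # assumed model id on that endpoint
    max_tokens=1024,
    messages=[{"role": "user", "content": "Review this function for bugs."}],
)

print(message.content[0].text)
```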
|
| ▲ | zmmmmm 12 hours ago | parent | next [-] |
| I've given V4 Pro some curly things and was impressed at how it figured them out. I agree high-level design is not its forte. But it sat in a loop and doggedly debugged a crazy dependency issue, coming to the right answer over the course of 15 minutes, which impressed me. |
|
| ▲ | nothinkjustai 10 hours ago | parent | prev | next [-] |
| Idk, I don't vibe-code, so even the flash model is great for generating code for myself; I tend to do the planning and design myself. The harness also matters, and so does the provider: I was using OpenRouter, and when I switched to the DeepSeek API, all the tool-call issues I was having suddenly resolved themselves. Flash is so damn fast at stuff like generating boilerplate that I can't go back to the bigger, slower models. |
|
| ▲ | esafak 12 hours ago | parent | prev [-] |
| You tried v4? |
|
| ▲ | codybontecou 12 hours ago | parent | next [-] |
| I tried to like it, but it eventually got stuck in a near-infinite loop trying to debug an extra curly bracket in an iOS app. That, and the lack of image-read support, surprised me. I'm a big fan of feeding screenshots into my LLM, and that killed it for me. |
|
| ▲ | josephg 12 hours ago | parent | prev [-] |
| Yeah, v4. I would have been much more impressed with v4 about 6 months ago, but I've been spoiled by Opus 4.7. DeepSeek isn't at the same level. |
|
| ▲ | zmmmmm 13 hours ago | parent | prev [-] |
| Interestingly, I had the same experience, and weirdly it's in part because it is clearly less intelligent. It's more of a mechanistic tool that just does what I ask (but is still very smart and very competent about it), and it spends less effort trying to win a Nobel Prize with each answer. Turns out I actually like that. |
|