furyofantares 5 days ago

Some of this has gotta be people asking more of it than they did before, and some has gotta be people who happened to use it for things it's good at to begin with and are now asking it things it's bad at (not necessarily harder things, just harder for the model).

However, Anthropic has also acknowledged (and fixed) some bugs that caused performance degradation, so I would guess there's still a good amount of real degradation if people are still seeing issues.

I've seen a lot of people switching to Codex CLI, and yesterday I did too; for now my $200/mo goes to OpenAI. It's quite good and I recommend it.

rapind 5 days ago | parent [-]

What makes it particularly tricky to evaluate is that there could still be other bugs, given how long these went without even being acknowledged, and they did state they're still looking into potential Opus issues.

I'll probably come back and try a Claude Code subscription again, but I'm good for the time being with the alternative I found. I also kind of suspect the subscription model isn't going to work for me long term; a pay-per-use approach (possibly with reserved time, like we have for cloud compute) where I can swap models with low friction is far more appealing.

data-ottawa 5 days ago | parent [-]

Benchmarks are too expensive for ordinary users to run, but it would be useful if they published their own benchmark results against prod over time; that would expose degradations in a more objective manner.

Of course there's always the problem of teaching to the test and of out-of-test degradation, but presumably bugs would be independent of that.
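Even a small fixed eval run against prod on a schedule would surface gross regressions. A minimal sketch of what that tracking could look like, assuming the anthropic Python SDK and an ANTHROPIC_API_KEY in the environment; the model name and tasks are placeholders, not Anthropic's actual benchmark suite:

  # Rough sketch: run a tiny fixed eval against the prod API and log the score.
  import datetime, json
  import anthropic

  # Placeholder tasks with known answers (illustration only).
  TASKS = [
      {"prompt": "What is 17 * 23? Answer with the number only.", "expect": "391"},
      {"prompt": "Reverse the string 'claude'. Answer with the result only.", "expect": "edualc"},
  ]

  client = anthropic.Anthropic()  # reads ANTHROPIC_API_KEY from the environment

  def run_canary(model="claude-sonnet-4-20250514"):  # model name is an assumption
      passed = 0
      for task in TASKS:
          reply = client.messages.create(
              model=model,
              max_tokens=64,
              messages=[{"role": "user", "content": task["prompt"]}],
          )
          text = reply.content[0].text.strip()
          passed += task["expect"] in text
      record = {
          "ts": datetime.datetime.now(datetime.timezone.utc).isoformat(),
          "model": model,
          "score": passed / len(TASKS),
      }
      # Append to a local log; plotting this over weeks would show degradation.
      with open("canary_log.jsonl", "a") as f:
          f.write(json.dumps(record) + "\n")
      return record

  if __name__ == "__main__":
      print(run_canary())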

rapind 5 days ago | parent [-]

A few weeks ago Reddit was on fire with reports of outages and timeouts, yet the Anthropic status page was showing everything as green. So even if they had benchmarks, I'm not sure they'd be transparent with them.