cedws 8 hours ago

I had a weird experience at work last week where Claude was just thinking forever about tasks and not actually doing anything. It was unusable. The next day it was fine again.

mstank 7 hours ago | parent | next [-]

That happens to me all the time. My current working theory is that when their servers are hammered, there's a queueing system that's invisible to end users.

jatora 6 hours ago | parent [-]

I was having this issue yesterday. The same prompt would send it into a loop where it would appear to be doing nothing for 30+ minutes until I cancelled it. It would show 400 tokens used and that's it.

I tested on a previous version (2.1.68) and it still ran into this never-ending loop, BUT at least the token count kept steadily increasing.

So we are seeing 1. some sort of model degradation is my guess (which is why it can't break out of a thinking loop on some problems), as well as 2. a clear drop in thinking-token UI transparency.

cjonas 8 hours ago | parent | prev | next [-]

Ya, I've had this experience more than a few times recently. I've heard people claiming they serve quantized models during high load, but it happens in Cursor as well, so I don't think it's specific to Anthropic's subscription. It could be that the context window has just gotten into a state that confuses the model... but that wouldn't explain why it appears to be temporary...

My best guess is this is the result of the companies running "experiments" to test changes. Or it's just all in my head :)

whywhywhywhy 7 hours ago | parent [-]

The Cursor one is back to Claude 4, or 3.5 at best. It struggles to do things it did effortlessly a few weeks ago.

It's not under load either; it's just fully downgraded. Feels more like they're dialing in what they can get away with, but they're pushing it very far.

sunaookami 7 hours ago | parent | prev | next [-]

Set MAX_THINKING_TOKENS to 0; Claude's thinking hardly does anything and just wastes tokens. It actually often performs worse with thinking than without it.
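For anyone who wants to try this: MAX_THINKING_TOKENS is an environment variable Claude Code reads, so disabling thinking is just a matter of setting it before launching. A minimal sketch (the prompt string is made up; `claude -p` is the CLI's non-interactive print mode):

```shell
# Disable extended thinking for everything launched from this shell,
# then start Claude Code as usual.
export MAX_THINKING_TOKENS=0
claude

# Or scope it to a single non-interactive invocation:
MAX_THINKING_TOKENS=0 claude -p "summarize this repo"
```

You can also put the variable in your Claude Code settings so it applies to every session instead of one shell.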

gruez 7 hours ago | parent [-]

Not the guy you're responding to, but when this happens the token counter is frozen at some low value (e.g. 1k-10k) as well, so it's not thinking in circles but rather not thinking (or doing anything, for that matter) at all.

jatora 6 hours ago | parent | next [-]

Same issue I described in my other comment in this thread (endless loop, token count frozen at ~400).

When I left it running overnight, it finally sent a message saying it had exceeded the 64,000 output token limit.

egeozcan 7 hours ago | parent | prev [-]

This exact thing has been happening to me since yesterday. It comes back to life when I throw the whole session away.

freedomben 7 hours ago | parent | prev [-]

This happened to me as well! It was especially infuriating because I had just upgraded to the $200-per-month plan after exhausting my weekly quota, and then the entire next day was a complete bust because of this issue. I want my money back!

cedws 7 hours ago | parent [-]

What day was it?

freedomben 7 hours ago | parent [-]

It started Thursday mid-to-late morning and ended Friday night (US timezone).

cedws 7 hours ago | parent [-]

Same day, then. For me it was happening roughly between 9am and 5pm BST.