pxtail 4 hours ago

Recently, after noticing how quickly limits are consumed and reading others' complaints about the same issue on Reddit, I started wondering how much of this is a real error or a bug hidden somewhere, and how much is testing what threshold of tightened limits will be tolerated without people cancelling their accounts. If the shit ever hits the fan, it can always be dismissed by waving hands and apologizing (or not) about some abstract "bug".

The lack of transparency and accountability behind all of this is, in my view, incredible.

vintagedave 3 hours ago | parent | next [-]

I've run into this, and I highly doubt I'm one of the heavier users. I leave gaps between sessions, don't have many running at once, work on smaller codebases, etc. Yet just a few minutes ago I hit a quota. In the past I did far more work with it without ever running into one.

I emailed their support a few days ago with details, concerns, a link to the Twitter thread from one of their employees, and a concrete support request, and got an AI agent ('Fin') telling me:

> While our Support team is unable to manually reset or work around usage limits, you can learn about best practices here. If you’ve hit a message limit, you’ll need to wait until the reset time, or you can consider purchasing an upgraded plan (if applicable).

I replied saying that was not an appropriate answer.

You're absolutely right re the lack of transparency and accountability. On one hand, Anthropic generates goodwill by appearing to have a more ethical stance than OpenAI, and a better product. On the other hand, they kill that goodwill fast through extremely poor treatment of their customers.

If they have a bug, they need to resolve it, and in the meantime refund quotas. 'Unable to'? That's shocking. Refunding here is simple and reasonable; it's basic customer service. I don't know if they realise the damage their attitude is doing.

Kim_Bruning 3 hours ago | parent [-]

Fin is the most useless thing ever. There's no obvious way to get reports in front of a human in a timely manner, and no reason to believe Fin interactions are even retained.

Ultimately, this means no loyalty. I can't stay loyal to a brand that doesn't actually respond to inquiries, bug reports, or outage reports at all.

I do understand that Anthropic is operating at a tremendous scale and can't have enough humans in the loop. This sounds like a good use for AI classification and triage, really!
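
A rough sketch of what that triage could look like, assuming the official Anthropic Python SDK; the label set, prompt, and model alias are made-up placeholders for illustration, not anything Anthropic actually runs:

    import anthropic  # official SDK: pip install anthropic

    client = anthropic.Anthropic()  # reads ANTHROPIC_API_KEY from the environment

    LABELS = ["billing", "bug_report", "quota_complaint", "other"]  # hypothetical queues

    def triage(ticket_text: str) -> str:
        # Ask a cheap model to pick exactly one label for the ticket.
        msg = client.messages.create(
            model="claude-3-5-haiku-latest",  # placeholder alias; any small model works
            max_tokens=10,
            messages=[{
                "role": "user",
                "content": "Classify this support ticket as exactly one of "
                           f"{LABELS}. Reply with the label only.\n\n{ticket_text}",
            }],
        )
        label = msg.content[0].text.strip()
        return label if label in LABELS else "other"  # unknowns go to a human queue

Anything that lands in "other" could then be routed to an actual human instead of Fin's canned answers.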

foxyv an hour ago | parent | prev | next [-]

I suspect that Claude had a bug that undercounted tokens and they fixed it.
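
Pure speculation, but that kind of undercount is an easy bug to write. A hypothetical metering helper in Python (the API genuinely does report usage.input_tokens and usage.output_tokens separately on each response):

    def record_usage(quota: dict, usage) -> None:
        # usage comes from an API response; it carries .input_tokens and .output_tokens.
        quota["used"] += usage.input_tokens  # BUG: output tokens never counted
        # The fix, which would make limits suddenly feel much tighter:
        # quota["used"] += usage.input_tokens + usage.output_tokens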

mmmlinux a few seconds ago | parent [-]

I wonder if that's why they were offering the bonus off-hours limits: easing people into the transition.

JambalayaJimbo 4 hours ago | parent | prev | next [-]

Once you get used to using Claude as an abstraction layer, you start getting pretty reckless with it.

My organization has the concept of "premium models", where our limits reset every month. I hit my limit pretty quickly last month because I was burning tokens on things that would have been a simple bash loop in the past - all because I'd gotten used to interfacing with Claude at the chat layer for all my automation needs without thinking any more of it.
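
For illustration only (the parent doesn't say what the actual task was): the kind of one-off chore that eats tokens in a chat session but is a trivial local loop, sketched here in Python rather than bash:

    from pathlib import Path

    # Rename every .log file in a directory to .log.bak - no model required.
    for path in Path("logs").glob("*.log"):
        path.rename(path.with_name(path.name + ".bak"))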

devmor 4 hours ago | parent [-]

This is a real danger that I think a lot of people will run into as prices keep going up.

Completely outside of the productivity debate, offloading cognitive tasks to LLMs leaves you less practiced in them and less ready to do them when the LLM isn't available. When you have to delegate only certain tasks to the LLM for financial reasons, you may find yourself very frustrated.

tjoff 41 minutes ago | parent | prev | next [-]

Working as intended? They openly state that how quickly your limit is reached depends on many factors (which you aren't told) as well as the current load on their systems.

Could just be that usage has gone up.

joshuafuller 4 hours ago | parent | prev | next [-]

This feels a lot like the same playbook we’re seeing with dynamic pricing in retail, just applied to compute instead of products. You never really know what you’re getting, and the rules shift under you.

What makes it worse is the lack of transparency. If there were clear, hard limits, people could plan around them. Instead it's this moving target that makes it impossible to trust for real work.

At some point it stops feeling like a bug and starts feeling like a pricing experiment on users.

bayarearefugee 4 hours ago | parent | next [-]

The clear trend over the past decade or so has been using analytics and data gathering to extract maximum rents from every customer in every industry, and AI is going to massively accelerate this.

The only way out is government regulation, which means we're screwed in the US (our government is too far gone to represent average citizens' interests in any meaningful way), but Europeans may have a chance if they get it together and demand change.

tartoran 4 hours ago | parent | prev [-]

What a horrid glimpse of the future. I hope we don't get there, and that we all collectively fight back with our wallets.

Tade0 4 hours ago | parent [-]

I'm worried that the present is actually living off a line of credit that will be spent/closed soon.

nicce 4 hours ago | parent | prev | next [-]

Are they going to pay it back if a subscription was paid for but the token limit was less than advertised? Is there some fine print somewhere preventing people from simply suing, or doing a chargeback through their credit card?

jadar 4 hours ago | parent [-]

Part of the issue is that they don't actually advertise what the token limit is - just something vague like "this is 5x more than Free" or "5x more than Pro". They seem to be free to change the basis however they please, because most of us are more than happy to use whatever they give us at the discounted subscription pricing.

thisisit 3 hours ago | parent | prev [-]

They keep running experiments, like $50 in free extra usage credits, or 2x usage outside certain windows when inference is very slow. You can't help but think this is all a boiling-the-frog experiment: finding out how much they can charge.