| ▲ | el_benhameen 5 days ago |
| This is how I feel about the 100 msg/wk limit on o3 for the ChatGPT plus plan. There’s no way to see how much I’ve used, and it’s an important enough resource that my lizard brain wants to hoard it. The result is that I way underutilize my plan and go for one of the o4-mini models instead. I would much prefer a lower daily limit, but maybe the underutilization is the point of the weekly limit. *edited to change “pro” to “plus” |
|
| ▲ | landl0rd 5 days ago | parent | next [-] |
| You can tell it’s intentional with both OpenAI and Anthropic by how opaque they’ve made it. I can’t see a nice little bar showing how much I’ve used versus how much I have left against the given rate limits, so it pressures users to hoard, because it prevents them from budgeting it out and saying “okay, I’ve used 1/3 of my quota and it’s Wednesday, I can use more faster.” |
| |
| ▲ | xpe 5 days ago | parent | next [-] | | > pressures users to hoard As a pedantic note, I would say 'ration'. Things you hoard don't magically go away after some period of time. | | |
| ▲ | zamadatix 4 days ago | parent | next [-] | | FWIW neither hoard nor ration imply anything about permanence of the thing to me. Whether you were rationed bread or you hoarded bread, the bread isn't going to be usable forever. At the same time whether you were rationed sugar or hoarded sugar, the sugar isn't going to expire (with good storage). Rationed/hoarded do imply, to me, something different about how the quantity came to be though. Rationed being given or setting aside a fixed amount, hoarded being that you stockpiled/amassed it. Saying "you hoarded your rations" (whether they will expire) does feel more on the money than "you ration your rations" from that perspective. I hope this doesn't come off too "well aktually", I've just been thinking about how I still realize different meanings/origins of common words later in life and the odd things that trigger me to think about it differently for the first time. A recent one for me was that "whoever" has the (fairly obvious) etymology of who+ever https://www.etymonline.com/word/whoever vs something like balloon, which has a comparatively more complex history https://www.etymonline.com/word/balloon | | |
| ▲ | mattkrause 4 days ago | parent | next [-] | | For me, the difference between ration and hoard is the uhh…rationality of the plan. Rationing suggests a deliberate, calculated plan: we’ll eat this much at these particular times so our food lasts that long. Hoard seems more ad hoc and fear-driven: better keep yet another beat-up VGA cable, just in case. | | |
| ▲ | jjani 4 days ago | parent [-] | | > Hoard seems more ad hoc and fear-driven: better keep yet another beat-up VGA cable, just in case. Counterexample: animals hoarding food for winter time, etc. | | |
| ▲ | nothrabannosir 4 days ago | parent [-] | | Rather a corroborating example than a counter, if you believe how many nuts squirrels lose sight of after burying them. | | |
| ▲ | xpe 4 days ago | parent [-] | | Exactly. How many random computer dongles and power supplies get buried in sundry boxes that are effectively lost to the world? |
|
|
| |
| ▲ | kanak8278 4 days ago | parent | prev | next [-] | | It could only happen on Hacker News that people talking about the Claude Code limit end up discussing which word best describes it. :-) I just love this community for these silly things. | |
| ▲ | 14123newsletter 4 days ago | parent | prev | next [-] | | Doesn't hoarding mean you can get more bread? While rationing means: "here is 1kg, use it however you want, but you can't get more". | | |
| ▲ | zamadatix 4 days ago | parent [-] | | Hoarding doesn't really imply how you got it, just that you stockpile once you do. I think you're bang on about rationing - it's about assigning the fixed amount. The LLM provider does the rationing, the LLM user hoards their rations. One could theoretically ration their rations out further... but that would require knowing your usage precisely enough to set the remaining fixed amounts - which is exactly what's missing in the interface. |
| |
| ▲ | randomcarbloke 4 days ago | parent | prev [-] | | Bread can be rationed but cannot be hoarded. |
| |
| ▲ | nine_k 4 days ago | parent | prev | next [-] | | Rationing implies an ability to measure: this amount per day. But measuring the remaining amount is exactly what Claude Code API does not provide. So, back to hoarding. | |
| ▲ | landl0rd 4 days ago | parent | prev [-] | | Rationing is precisely what we want to do: I have x usage this week; let me determine precisely how much I can use without going over. Hoarding implies a less reasoned path of “I never know when I might run out so I must use as little as possible, save as much as I can.” One can hoard gasoline but it still expires past a point. |
| |
| ▲ | sothatsit 4 days ago | parent | prev | next [-] | | Anthropic also does this because they will dynamically change the limits to manage load. Tools like ccusage show you how much you've used and I can tell sometimes that I get limited with significantly lower usage than I would usually get limited for. | | |
| ▲ | TheOtherHobbes 4 days ago | parent [-] | | Which is a huge problem, because you literally have no idea what you're paying for. One day a few hours of prompting is fine; another, you'll hit your weekly limit and you're out for seven days. While still paying your subscription. I can't think of any other product or service which operates on this basis - where you're charged a set fee, but the access you get varies from hour to hour entirely at the provider's whim. And if you hit a limit which is a moving target you can't even check, you're locked out of the service. It's ridiculous. Begging for a lawsuit, tbh. | |
| ▲ | lukaslalinsky 4 days ago | parent [-] | | What happens when you have a gym membership, but you go there during their busy hours? What they could do is pay as you go, with pricing increasing with the demand (Uber style), but I don't think people would like that much. | | |
| ▲ | deeth_starr_v 4 days ago | parent [-] | | Your analogy would work if the gym would randomly suspend your membership for a week if you worked out too much during peak hours |
|
|
| |
| ▲ | canada_dry 4 days ago | parent | prev | next [-] | | OpenAI's "PRO" subscription is really a waste of money IMHO, for this and other reasons. Decided to give PRO a try when I kept getting terrible results from the $20 option. So far it's perhaps 20% improved in complex code generation. It still has the extremely annoying ~350 line limit in its output. It still IGNORES EXPLICIT CONTINUOUS INSTRUCTIONS eg: do not remove existing comments. The opaque overriding rules - which it keeps violating despite begging forgiveness each time - are extremely frustrating!! | |
| ▲ | JoshuaDavid 4 days ago | parent | next [-] | | One thing that has worked for me when I have a long list of requirements / standards I want an LLM agent to stick to while executing a series of 5 instructions is to add extra steps at the end of the instructions like "6. check if any of the code standards are not met - if not, fix them and return to step 5" / "7. verify that no forbidden patterns from <list of things like no-op unit tests, n+1 query patterns, etc> exist in added code - if you find any, fix them and return to step 5" etc. Often they're better at recognizing failures to stick to the rules and fixing the problems than they are at consistently following the rules in a single shot. This does mean that often having an LLM agent do a thing works but is slower than just doing it myself. Still, I can sometimes kick off a workflow before joining a meeting, so maybe the hours I've spent playing with these tools will eventually pay for themselves in improved future productivity. | |
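The loop-back pattern described above amounts to plain prompt construction. A minimal sketch - the step text, rule names, and forbidden patterns below are made-up placeholders, not from any real project or vendor API:

```python
# Sketch of the "self-check steps that loop back" prompting pattern.
# All step text and rule names here are illustrative placeholders.

TASK_STEPS = [
    "1. Read the ticket and locate the affected module.",
    "2. Write a failing unit test that reproduces the bug.",
    "3. Implement the fix.",
    "4. Run the full test suite.",
    "5. Refactor the change to match the surrounding code style.",
]

# Appended last: models are often better at performing a review pass
# than at obeying every rule in a single shot while writing the code.
VERIFICATION_STEPS = [
    "6. Check whether any code standard is violated (naming, error "
    "handling, existing comments preserved). If so, fix it and "
    "return to step 5.",
    "7. Check the added code for forbidden patterns (no-op unit tests, "
    "N+1 query loops). If you find any, fix them and return to step 5.",
]

def build_prompt(task_steps: list[str], check_steps: list[str]) -> str:
    """Join task steps and the trailing self-check loop into one instruction block."""
    return "\n".join(task_steps + check_steps)

prompt = build_prompt(TASK_STEPS, VERIFICATION_STEPS)
print(prompt)
```

The trade-off is extra turns (and extra token spend) in exchange for catching rule violations that single-shot compliance misses.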
| ▲ | jmaker 4 days ago | parent | prev [-] | | There are things it's great at and things it deceives you with. In many cases I needed it to check something I knew was a problem; o3 kept insisting it was possible due to reasons a, b, c, and thankfully gave me links. Since I knew it used to be a problem, I was surprised and followed the links, only to read in black and white that it still wasn't. So I explained to o3 that it was wrong. Two messages later we were back at square one. One week later it hadn't updated its knowledge. Months later it's still the same. But on things I have no idea about, like medicine, it feels very convincing. Am I at risk? People don't understand Dunning-Kruger. People are prone to biases and fallacies. Likely all LLMs are inept at objectivity. My instructions to LLMs are always: strictness, no false claims, Bayesian likelihoods on every claim. Some models ignore the instructions entirely, while others stick strictly to them. In the end it doesn't matter when they insist on 99% confidence in refuted fantasies. | |
| ▲ | namibj 4 days ago | parent [-] | | The problem is that all current mainstream LLMs are autoregressive decoder-only models, mostly (but not exclusively) transformers. Their math can't apply a modifier like "this example/attempt is wrong due to X, Y, Z" to anything that came before the modifier clause in the prompt. Despite how enticing these models are to train, this limitation is inherent. (For this specific situation, people recommend going back to just before the wrong output and editing the message to reflect the new understanding, because confidently-wrong output with no correcting pre-clause will "pollute the context": the model looks up aspects of the context encoded in higher-layer token embeddings, but since the "wrong" label could never be applied to the confidently-wrong tokens themselves, it retrieves them as-is and spews even more BS. Similar to how telling a GPT-2/GPT-3 model it's an expert on $topic made it actually perform better on that topic, affirming that the model made an error primes it to behave in a way that gets it yelled at again... sadly.) |
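A toy NumPy sketch of the causal-masking point above (a single attention head over random vectors, not any real model's code): because position i only attends to positions j ≤ i, a correction appended after the wrong tokens cannot change the representations already computed for them.

```python
import numpy as np

def causal_self_attention(x: np.ndarray) -> np.ndarray:
    """Single-head self-attention with a causal mask; x is (seq_len, d)."""
    seq_len, d = x.shape
    scores = x @ x.T / np.sqrt(d)                  # (seq_len, seq_len)
    mask = np.tril(np.ones((seq_len, seq_len)))    # 1 where j <= i
    scores = np.where(mask == 1, scores, -np.inf)  # block attention to future tokens
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)
    return weights @ x

rng = np.random.default_rng(0)
answer = rng.normal(size=(4, 8))      # stand-in for the confidently-wrong tokens
correction = rng.normal(size=(2, 8))  # stand-in for "that was wrong..." appended later

out_without = causal_self_attention(answer)
out_with = causal_self_attention(np.vstack([answer, correction]))

# The first 4 output rows are unchanged: the appended correction had no
# effect on the representations of the tokens that came before it.
print(np.allclose(out_without, out_with[:4]))  # True
```

In this setup, the only way for a "that's wrong" signal to reach the bad tokens' representations is to place it before them - which matches the edit-the-message-and-retry advice.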
|
| |
| ▲ | brookst 4 days ago | parent | prev [-] | | I think the simple product prioritization explanation makes way more sense than a a second-order conspiracy to trick people into hoarding. Reality is probably that there’s a backlog item to implement a view, but it’s hard to prioritize over core features. | | |
| ▲ | parineum 4 days ago | parent | next [-] | | > Reality is probably that there’s a backlog item to implement a view, but it’s hard to prioritize over core features. It's even harder to prioritize when the feature you pay to develop probably costs you money. | |
| ▲ | Zacharias030 4 days ago | parent | prev [-] | | I hear OpenAI and Anthropic are making tools that are supposedly pretty good at helping with creating a view from a backlog. Back to the conspiracy ^^ | | |
| ▲ | brookst 4 days ago | parent | next [-] | | If you’re on HN you’ve probably been around enough to know it’s never that simple. You implement the counter, now customer service needs to be able to provide documentation, users want to argue, async systems take hours to update, users complain about that, you move the batch accounting job to sync, queries that fail still end up counting, and on and on. They should have an indicator, for sure. But I at least have been around the block enough to know that declaring “it would be easy” for someone else’s business and tech stack is usually naive. | |
| ▲ | bluelightning2k 4 days ago | parent | prev [-] | | "We were going to implement a counter but hit our weekly Claude Code limit before we could do it. Maybe next week? Anthropic." |
|
|
|
|
| ▲ | hinkley 4 days ago | parent | prev | next [-] |
| > it’s an important enough resource that my lizard brain wants to hoard it. I have zero doubt that this is working exactly as intended. We will keep all our users at 80% of what we sold them by keeping them anxious about how close they are to the limit. |
|
| ▲ | oc1 5 days ago | parent | prev | next [-] |
| They know this psychology. This dark pattern is intentional so you will use their costly service less. |
| |
| ▲ | hn_throwaway_99 5 days ago | parent [-] | | I don't think this counts as a "dark pattern". The reality is that these services are resource constrained, so they are trying to build in resource limits that are as fair as possible and prevent people from gaming the system. | | |
| ▲ | const_cast 4 days ago | parent | next [-] | | The dark pattern isn't the payment pattern, that's fine. The dark pattern is hiding how much you're using, thereby tricking the human lizard brain into irrationally fearing they are running out. The human brain is stupid and remarkably exploitable. Just a teensy little bit of information hiding can elicit strange and self-destructive behavior from people. You aren't cut off until you're cut off, then it's over completely. That's scary, because there's no recourse. So people are going to try to avoid that as much as possible. Since they don't know how much they're using, they're naturally going to err on the side of caution - paying for more than they need. | |
| ▲ | gorbypark 4 days ago | parent [-] | | I'm only on the $20 Pro plan, and I'm a big user of the /clear command. I don't really use Claude Code that much either, so the $20 plan is perfect for me. However, a few times I've gotten the "approaching context being full, auto compact coming soon" thing, so I manually do /compact and I run out of the 5hr usage window while compacting the context. It's extremely infuriating, because if I could have a view into how close I was to being rate limited in the 5 hour window, I might make a different choice as to compact or finish the last little thing I was working on. |
| |
| ▲ | aspenmayer 4 days ago | parent | prev | next [-] | | > prevent people from gaming the system If I sit down for dinner at an all-you-can-eat buffet, I get to decide how much I’m having for dinner. I don’t mind if they don’t let me take leftovers, as it is already understood that they mean as much as I can eat in one sitting. If they don’t want folks to take advantage of an advertised offer, then they should change their sales pitch. It’s explicitly not gaming any system to use what you’re paying for in full. That’s your right and privilege as that’s the bill of goods you bought and were sold. | | |
| ▲ | Wowfunhappy 4 days ago | parent [-] | | I feel like using Claude Code overnight while you sleep or sharing your account with someone else is equivalent to taking home leftovers from an all-you-can-eat buffet. I also find it hard to believe 5% of customers are doing that, though. | | |
| ▲ | aspenmayer 4 days ago | parent | next [-] | | If that’s off-peak time, I’d argue the adjacent opposite point, that Anthropic et al could implement deferred and/or scheduled jobs natively so that folks can do what they’re going to do anyway in a way that comports with reasonable load management that all vendors must do. For example, I don’t mind that Netflix pauses playback after playing continuously for a few episodes of a show, because the options they present me with acknowledge different use cases. The options are: stop playing, play now and ask me again later, and play now and don’t ask me again. These options are kind to the user because they don’t disable the power user option. | | |
| ▲ | gorbypark 4 days ago | parent [-] | | Is there really an off peak time, though? I think Anthropic is running on AWS with the big investment from Amazon, right? I'm sure there's some peaks and valleys but with the Americas, Europe and Asia being in different time zones I'd expect there'd be a somewhat "baseline" usage with peaks where the timezones overlap (European afternoons and American mornings, for example). I know in my case I get the most 503 overloaded errors in the European afternoon. |
| |
| ▲ | closewith 4 days ago | parent | prev | next [-] | | I use Claude Code with Opus four days a week for about 5 hours a day. I've only once hit the limit. Yet the tool others mentioned here (ccusage) indicates I used about $120 in API equivalents per day or about $1,800 to date this month on a $200 subscription. That has to be a loss leader for Anthropic that they now want to wind back. I also wouldn't consider my usage extreme. I never use more than one instance, don't run overnight, etc. | |
| ▲ | kelnos 4 days ago | parent | prev | next [-] | | I think this is just a bad analogy. I've definitely set Claude Code on a task and then wandered off to do something else, and come back an hour or so later to see if it's done. If I'd chosen to take a nap, would you say I'm "gaming the system"? That's silly. I'm using an LLM agent to free up my own time; it's up to me to decide what I do with that time. | | |
| ▲ | Wowfunhappy 4 days ago | parent [-] | | No, this doesn't sound like gaming the system to me. However, if you were using a script to automatically queue up tasks so they can run as soon as your 5-hour-session expires to ensure you're using Claude 24/7, that's a different story. A project like this was posted to HN relatively recently. As I said, I have trouble believing this constitutes 5% of users, but it constitutes something and yeah, I feel Anthropic is justified in putting a cap on that. | | |
| ▲ | yunohn 4 days ago | parent [-] | | They always have had a “soft” sessions limit per month anyway, so it still doesn’t make sense to limit weekly. |
|
| |
| ▲ | benterix 4 days ago | parent | prev [-] | | I use Claude Code overnight almost exclusively; it's simply not worth my time during the day. It's just easier to prepare precise instructions, let it run, and check the results in the morning. If it goes awry (it usually does), I can modify the instructions and start from scratch, without getting too attached to it. |
|
| |
| ▲ | hshdhdhj4444 4 days ago | parent | prev | next [-] | | Besides the click mazes to unsubscribe, I'm struggling to think of a darker pattern than having usage limits but not showing usage. The dark pattern isn't the usage limit. It's the lack of information about current and remaining usage. | |
| ▲ | Timwi 4 days ago | parent | prev [-] | | The dark pattern is not telling users how much they've used so they can't plan or ration. |
|
|
|
| ▲ | sitkack 5 days ago | parent | prev | next [-] |
| Working as Intended. |
| |
| ▲ | Wowfunhappy 5 days ago | parent [-] | | Well, kind of. If you don't use it at all you're going to unsubscribe. This isn't like a gym membership where people join aspirationally. No one's new year's resolution is "I'm going to use o3 more often." | | |
| ▲ | mattigames 5 days ago | parent | next [-] | | Yes it is, in the way of "I'm gonna work on X thing that is now much easier thanks to ChatGPT" and then never working on it due to lack of time or motivation or something else. | |
| ▲ | christina97 5 days ago | parent | prev [-] | | What makes you think it’s any different? |
|
|
|
| ▲ | gfiorav 5 days ago | parent | prev | next [-] |
| I nervously hover over the VSCode Copilot icon, watching the premium requests slowly accumulate. It’s not an enjoyable experience (whether you know how much you've used or not :) ) |
| |
| ▲ | benjiro 4 days ago | parent [-] | | Noticed that my productive usage of Copilot dropped like a brick after they introduced those limits. You feel constantly on the clock, and being forced to constantly change models gets tiresome very fast. Unless you use the "free" GPT 4.1 like MS wants you to (not the same as Claude, even with Beast Mode). And how long is that going to stay free? It feels designed to simply push you to an MS product (MS > OpenAI) instead of a third party. So what happens a year from now? Paid GPT 5.1, with 4.1 removed? If it were not for the insane prices of actual large-memory GPUs and the slowness of large models, I would be running LLMs at home. Right now MS/Anthropic/OpenAI are sitting right in that zone where it's not yet expensive enough to push you to a fully local LLM. |
|
|
| ▲ | milankragujevic 5 days ago | parent | prev | next [-] |
| Where did you find this info? I am unable to find it on OpenAI's website. https://help.openai.com/en/articles/6950777-what-is-chatgpt-... I haven't yet run into this limit... |
| |
|
| ▲ | wiseowise 4 days ago | parent | prev | next [-] |
| > There’s no way to see how much I’ve used Hover over it on desktop and it’ll show how many requests you have left. |
|
| ▲ | littlestymaar 5 days ago | parent | prev | next [-] |
| > This is how I feel about the 100 msg/wk limit on o3 for the ChatGPT Do I read this correctly? Only 100 messages per week, on the pro plan worth a few hundred bucks a month?! |
| |
| ▲ | CSMastermind 5 days ago | parent | next [-] | | That's definitely not correct because I'm on the pro plan and make extensive use of o3-pro for coding. I've sent 100 messages in a single day with no limitation. Per their website: https://help.openai.com/en/articles/9793128-what-is-chatgpt-... There are no usage caps on pro users (subject to some common sense terms of use). | |
| ▲ | mhl47 5 days ago | parent | prev | next [-] | | No, it's 100 a week for plus users. | | |
| ▲ | doorhammer 5 days ago | parent | prev [-] | | I think it's just a typo. I have a pro plan and I hammer o3 - I'd guess more than a hundred a day sometimes - and have never run into limits personally. Wouldn't shock me if something like that happened, but I haven't seen evidence of it yet |
|
|
| ▲ | _giorgio_ 4 days ago | parent | prev | next [-] |
| Not sure, but o3 seems to be 200 messages per 10 days now, not weekly, in my opinion. |
|
| ▲ | artursapek 5 days ago | parent | prev | next [-] |
| Just curious, what do people use these expensive reasoning models for? |
| |
|
| ▲ | jstummbillig 5 days ago | parent | prev [-] |
| If it behaves anything like the GPT-4.5 limit, it will let you know when you near it. |
| |