landl0rd 5 days ago

You can tell it’s intentional with both OpenAI and Anthropic by how opaque the limits are. I can’t see a nice little bar showing how much I’ve used versus how much I have left against the given rate limits, so it pressures users to hoard, because it prevents them from budgeting it out and saying “okay, I’ve used 1/3 of my quota and it’s Wednesday, I can use more faster.”
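
The budgeting itself would be trivial arithmetic if the numbers were exposed. A rough sketch in Python, with every number made up, since that's exactly what they don't show you:

    weekly_quota = 3_000_000   # tokens/week -- hypothetical, neither provider publishes this
    used = 1_000_000           # "I've used 1/3 of my quota"
    days_left = 4              # "it's Wednesday", assuming a Sunday reset

    remaining = weekly_quota - used
    daily_budget = remaining / days_left
    print(f"{remaining:,} tokens left, roughly {daily_budget:,.0f}/day until the reset")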

xpe 5 days ago | parent | next [-]

> pressures users to hoard

As a pedantic note, I would say 'ration'. Things you hoard don't magically go away after some period of time.

zamadatix 4 days ago | parent | next [-]

FWIW neither hoard nor ration imply anything about permanence of the thing to me. Whether you were rationed bread or you hoarded bread, the bread isn't going to be usable forever. At the same time whether you were rationed sugar or hoarded sugar, the sugar isn't going to expire (with good storage).

Rationed/hoarded do imply, to me, something different about how the quantity came to be though. Rationed being given or setting aside a fixed amount, hoarded being that you stockpiled/amassed it. Saying "you hoarded your rations" (whether they will expire) does feel more on the money than "you ration your rations" from that perspective.

I hope this doesn't come off too "well aktually", I've just been thinking about how I still realize different meanings/origins of common words later in life and the odd things that trigger me to think about it differently for the first time. A recent one for me was that "whoever" has the (fairly obvious) etymology of who+ever https://www.etymonline.com/word/whoever vs something like balloon, which has a comparatively more complex history https://www.etymonline.com/word/balloon

mattkrause 4 days ago | parent | next [-]

For me, the difference between ration and hoard is the uhh…rationality of the plan.

Rationing suggests a deliberate, calculated plan: we’ll eat this much at these particular times so our food lasts that long. Hoard seems more ad hoc and fear-driven: better keep yet another beat-up VGA cable, just in case.

jjani 4 days ago | parent [-]

> Hoard seems more ad hoc and fear-driven: better keep yet another beat-up VGA cable, just in case.

Counterexample: animals hoarding food for winter time, etc.

nothrabannosir 4 days ago | parent [-]

Rather a corroborating example than a counter, if you believe how many nuts squirrels lose sight of after burying them.

xpe 4 days ago | parent [-]

Exactly. How many random computer dongles and power supplies get buried in sundry boxes that are effectively lost to the world?

kanak8278 4 days ago | parent | prev | next [-]

It can only happen on Hacker News that people talking about the Claude Code limit end up discussing which word better describes it. :-)

I just love this community for these silly things.

14123newsletter 4 days ago | parent | prev | next [-]

Doesn't hoarding mean you can get more bread, while rationing means "here is 1kg, use it however you want but you can't get more"?

zamadatix 4 days ago | parent [-]

Hoarding doesn't really imply how you got it, just that you stockpile once you do. I think you're bang on about rationing - it's about assigning the fixed amount. The LLM provider does the rationing, the LLM user hoards their rations.

One could theoretically ration their rations out further... but that would require knowing the usage well enough to set the remaining fixed amounts - which is precisely what's missing in the interface.

randomcarbloke 4 days ago | parent | prev [-]

Bread can be rationed but cannot be hoarded.

nine_k 4 days ago | parent | prev | next [-]

Rationing implies an ability to measure: this amount per day. But measuring the remaining amount is exactly what the Claude Code API does not provide.

So, back to hoarding.

landl0rd 4 days ago | parent | prev [-]

Rationing is precisely what we want to do: I have x usage this week; let me determine precisely how much I can use without going over. Hoarding implies a less reasoned path of “I never know when I might run out so I must use as little as possible, save as much as I can.” One can hoard gasoline but it still expires past a point.

sothatsit 4 days ago | parent | prev | next [-]

Anthropic also does this because they will dynamically change the limits to manage load. Tools like ccusage show you how much you've used, and I can sometimes tell that I'm getting limited at significantly lower usage than I usually would be.

TheOtherHobbes 4 days ago | parent [-]

Which is a huge problem, because you literally have no idea what you're paying for.

One day a few hours of prompting is fine, another you'll hit your weekly limit and you're out for seven days.

While still paying your subscription.

I can't think of any other product or service which operates on this basis - where you're charged a set fee, but the access you get varies from hour to hour entirely at the provider's whim. And if you hit a limit - a moving target you can't even check - you're locked out of the service.

It's ridiculous. Begging for a lawsuit, tbh.

lukaslalinsky 4 days ago | parent [-]

What happens when you have a gym membership, but you go there during their busy hours?

What they could do is pay as you go, with pricing increasing with the demand (Uber style), but I don't think people would like that much.

deeth_starr_v 4 days ago | parent [-]

Your analogy would work if the gym would randomly suspend your membership for a week if you worked out too much during peak hours

canada_dry 4 days ago | parent | prev | next [-]

OpenAI's "PRO" subscription is really a waste of money IMHO for this and other reasons.

Decided to give PRO a try when I kept getting terrible results from the $20 option.

So far it's perhaps 20% improved in complex code generation.

It still has the extremely annoying ~350 line limit in its output.

It still IGNORES EXPLICIT CONTINUOUS INSTRUCTIONS eg: do not remove existing comments.

The opaque overriding rules - despite it begging forgiveness whenever it ignores instructions - are extremely frustrating!!

JoshuaDavid 4 days ago | parent | next [-]

One thing that has worked for me when I have a long list of requirements / standards I want an LLM agent to stick to while executing a series of 5 instructions is to add extra steps at the end of the instructions like "6. check if any of the code standards are not met - if not, fix them and return to step 5" / "7. verify that no forbidden patterns from <list of things like no-op unit tests, n+1 query patterns, etc> exist in added code - if you find any, fix them and return to step 5" etc.

Often they're better at recognizing failures to stick to the rules and fixing the problems than they are at consistently following the rules in a single shot.
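
To make the shape concrete, here's a rough sketch in Python - the step wording and the standards list are purely illustrative, not a prompt I'd claim works verbatim:

    # Illustrative only: appending "go back and check" steps to an agent's
    # instruction list, per the approach described above.
    task_steps = [
        "1. Read the failing test and the module it exercises.",
        "2. Outline the fix in comments before editing.",
        "3. Implement the fix.",
        "4. Run the test suite.",
        "5. Update any docs/comments touched by the change.",
    ]

    self_check_steps = [
        "6. Check whether any of the code standards below are not met; "
        "if so, fix them and return to step 5.",
        "7. Verify no forbidden patterns (no-op unit tests, N+1 query loops) "
        "exist in the added code; if you find any, fix them and return to step 5.",
    ]

    prompt = "\n".join(task_steps + self_check_steps)
    print(prompt)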

This does mean that often having an LLM agent do a thing works but is slower than just doing it myself. Still, I can sometimes kick off a workflow before joining a meeting, so maybe the hours I've spent playing with these tools will eventually pay for themselves in improved future productivity.

jmaker 4 days ago | parent | prev [-]

There are things it’s great at and things it deceives you with. Several times I needed it to check something I already knew was a problem; o3 kept insisting it was possible for reasons a, b, c, and thankfully gave me links. I knew it used to be a problem, so, surprised, I followed the links only to read in black and white that it still wasn’t possible. So I explained to o3 that it was wrong. Two messages later we were back at square one. One week later it hadn’t updated its knowledge. Months later it’s still the same.

But on things I have no idea about, like medicine, it feels very convincing. Am I at risk?

People don’t understand Dunning-Kruger. People are prone to biases and fallacies. Likely all LLMs are inept at objectivity.

My instructions to LLMs always demand strictness, no false claims, and Bayesian likelihoods on every claim. Some modes ignore the instructions entirely, while others stick to them strictly. In the end it doesn’t matter when they insist with 99% confidence on refuted fantasies.

namibj 4 days ago | parent [-]

The problem is that all current mainstream LLMs are autoregressive decoder-only models, mostly but not exclusively transformers. Their math can't apply a modifier like "this example/attempt is wrong due to X, Y, Z" to anything that came before the modifier clause in the prompt. Despite how enticing these models are to train, these limitations are inherent.

(For this specific situation people recommend going back to just before the wrong output and editing the message to reflect this understanding, because the confidently wrong output, with no advisory/correcting pre-clause, will "pollute the context": the model looks at the context for aspects encoded into high(-er)-layer token embeddings, inherently can't include the correct/wrong distinction because the "wrong" label couldn't be applied to the confidently-wrong tokens, thus retrieves those confidently-wrong tokens and spews even more BS. Similar to how telling a GPT2/GPT3 model it's an expert on $topic made it actually perform better on that topic, this affirmation that the model made an error primes it to behave in a way that gets it yelled at again... sadly.)
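
A toy numpy sketch of the causal-mask point (arbitrary dimensions and values, not any real model): appending a "that was wrong" token changes nothing about the representations already computed for the earlier tokens.

    import numpy as np

    rng = np.random.default_rng(0)
    T, d = 6, 8                       # 6 tokens (ending in the wrong answer), 8-dim states
    x = rng.normal(size=(T, d))

    def causal_attention(states):
        # Single-head self-attention with a causal mask: position i only sees j <= i.
        n, dim = states.shape
        scores = states @ states.T / np.sqrt(dim)
        scores[np.triu(np.ones((n, n), dtype=bool), k=1)] = -np.inf
        w = np.exp(scores - scores.max(axis=-1, keepdims=True))
        w /= w.sum(axis=-1, keepdims=True)
        return w @ states

    before = causal_attention(x)
    # Append a "that answer was wrong" correction token...
    x_with_correction = np.vstack([x, rng.normal(size=(1, d))])
    after = causal_attention(x_with_correction)

    # ...and the outputs for the earlier tokens are unchanged:
    print(np.allclose(before, after[:T]))   # True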

brookst 4 days ago | parent | prev [-]

I think the simple product prioritization explanation makes way more sense than a second-order conspiracy to trick people into hoarding.

Reality is probably that there’s a backlog item to implement a view, but it’s hard to prioritize over core features.

parineum 4 days ago | parent | next [-]

> Reality is probably that there’s a backlog item to implement a view, but it’s hard to prioritize over core features.

It's even harder to prioritize when the feature you pay to develop probably costs you money.

Zacharias030 4 days ago | parent | prev [-]

I hear OpenAI and Anthropic are making tools that are supposedly pretty good at helping with creating a view from a backlog.

Back to the conspiracy ^^

brookst 4 days ago | parent | next [-]

If you’re on HN you’ve probably been around enough to know it’s never that simple. You implement the counter, now customer service needs to be able to provide documentation, users want to argue, async systems take hours to update, users complain about that, you move the batch accounting job to sync, queries that fail still end up counting, and on and on.

They should have an indicator, for sure. But I at least have been around the block enough to know that declaring “it would be easy” for someone else’s business and tech stack is usually naive.

bluelightning2k 4 days ago | parent | prev [-]

"We were going to implement a counter but hit out weekly Claude code limit before we could do it. Maybe next week? Anthropic."