| ▲ | qiller 5 days ago |
| I'm ok using a limited resource _if_ I know how much of it I am using. The lack of visible progress towards limits is annoying. |
|
| ▲ | blalezarian 5 days ago | parent | next [-] |
| Totally agree with this. I live in constant anxiety, never knowing how far into my usage I am. |
|
| ▲ | steveklabnik 5 days ago | parent | prev | next [-] |
| Run npx ccusage@latest. Pass in "blocks --live" to get a live dashboard! I'm assuming it'll get updated to include these new windows as well. |
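A quick sketch of the invocations mentioned above (ccusage is a real npm package; the subcommand and flag are taken from this comment, and the behavior described in the comments is assumed, not verified here):

```shell
# One-off usage report: ccusage reads Claude Code's local logs,
# so no authentication or login sharing is needed.
npx ccusage@latest

# Live dashboard view of usage blocks, per the comment above.
npx ccusage@latest blocks --live
```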
| |
| ▲ | data-ottawa 5 days ago | parent | next [-] | | Oh wow, this showed me the usage stats for the period before ccusage was installed. That's very helpful, especially considering this change. ETA: You don't need to authenticate or share your login with this utility, so it's basically zero setup. | |
| ▲ | mtmail 5 days ago | parent | prev | next [-] | | Package page (with screenshot) https://www.npmjs.com/package/ccusage | |
| ▲ | bravura 5 days ago | parent | prev [-] | | Does ccusage (or Claude Code with a subscription) actually tell you what the limits are or how close you are to them? | | |
|
|
| ▲ | flkiwi 5 days ago | parent | prev | next [-] |
| It's not exactly the same thing, but imagine my complete surprise when, in the middle of a discussion with Copilot and without warning, it announced that the conversation had reached its length limit and I had to start a new one with absolutely no context from the current one. Copilot has many, many usability quirks, but that was the first that actually made me mad. |
| |
| ▲ | jononor 5 days ago | parent | next [-] | | ChatGPT and Claude do the same. And I have noticed that model performance can often degrade a lot before such a hard limit, so even when not hitting it, splitting out to a new session can be useful. Context management is the new prompt engineering... | |
| ▲ | stronglikedan 5 days ago | parent | prev [-] | | The craziest thing to me is that it actually completely stopped you in your tracks instead of upselling you on the spot to continue. |
|
|
| ▲ | mvieira38 5 days ago | parent | prev [-] |
| You can't really predict output token usage either, which makes this especially concerning |
| |
| ▲ | qiller 5 days ago | parent [-] | | Like when Claude suddenly decides it's not happy with a tiny one-off script and generates 20 refined versions :D |
|