EMM_386 11 hours ago
This is great. When I work with AI on large, tricky code bases, I try to set up a collaboration where it hands off to me the things that may result in a large number of tokens (excess tool calls, imprecise searches, verbose output, reading large files without a range specified, etc.). This will help narrow down exactly which tasks to still handle manually to best stay within token budgets. Note: "yourusername" in the git clone install instructions should be replaced.
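For anyone curious what that looks like in practice, a standing instruction along these lines can go in a CLAUDE.md or system prompt (the wording here is purely illustrative, not the commenter's actual setup):

    Before reading any file longer than ~200 lines, ask me for the
    relevant line range instead of reading the whole file.
    Before running a repo-wide search, ask me to narrow it to specific
    directories or file globs.
    After finishing a task, summarize it in one sentence; skip file
    trees and restatements of the plan.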
winchester6788 5 hours ago
I had a similar problem: when Claude Code (or Codex) is running in a sandbox, I wanted to put a cap on, or at least get notified about, large contexts, especially because once x0K words are crossed, the output gets worse. I made https://github.com/quilrai/LLMWatcher, a Mac app, for this purpose. Any thoughts would be appreciated.
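The core of the idea can be sketched in a few lines of stdlib Python (the polling approach and the ~4-characters-per-token heuristic are assumptions on my part, not necessarily how LLMWatcher works internally):

    import os
    import sys
    import time

    CHARS_PER_TOKEN = 4      # crude heuristic, not a real tokenizer
    BUDGET_TOKENS = 50_000   # ballpark cap before quality degrades

    def watch(path, interval=5.0):
        # Poll the session transcript and warn once its estimated
        # token count crosses the budget.
        warned = False
        while True:
            est = os.path.getsize(path) // CHARS_PER_TOKEN
            if est > BUDGET_TOKENS and not warned:
                print(f"warning: ~{est:,} est. tokens "
                      f"(budget {BUDGET_TOKENS:,})")
                warned = True
            time.sleep(interval)

    if __name__ == "__main__":
        watch(sys.argv[1])

Watching file size is cruder than hooking the API itself, but it's enough to get a notification before the quality drop kicks in.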
cedws 8 hours ago
I've been trying to get token usage down by instructing Claude to stop being so verbose (announcing what it's going to do beforehand, recapping what it just did, spitting out pointless file trees), but it ignores my instructions. It could be that the model is just hard to steer away from doing that... or Anthropic wants it to waste tokens so you burn through your usage quickly.
kej 10 hours ago
Would you mind sharing more details about how you do this? What do you add to your AI prompts to make it hand those tasks off to you?
jmuncor 10 hours ago
Hahahah, just fixed it, thank you so much!!!! Think about extending this to a prompt admin: I'm sure there is a lot of trash that the system sends on every query, and I think we can improve this.