ketzo 5 days ago

What does your Claude code usage look like if you’re getting limited in 30 minutes without running multiple instances? Massive codebase or something?

blitzar 5 days ago | parent [-]

I set Claude to writing docstrings on a handful of files - 4 or 5 files of a couple hundred lines each, with a couple of classes in each - so it didn't need to scan the codebase (much).

Low-danger task, so I let it do as it pleased - 30 minutes and it was maxed out. I could probably have reduced context with a /clear after every file, but then I would have to participate.

tlbsofware 5 days ago | parent | next [-]

You can tell it to review and edit each file within a Task/subagent - you can even say to run them in parallel - and it will use a separate context for each file without you having to clear anything manually.

blitzar 5 days ago | parent [-]

Every day is a school day - I feel like this is a quicker way to burn usage, but it does manage context nicely.

tlbsofware 5 days ago | parent [-]

I haven’t run any experiments on token usage with Tasks, but if you ran them all together without Tasks, then each file’s full operation _should_ contribute cached tokens to every subsequent request. If you use a Task, only the summary returned from that Task contributes to the cached tokens. From my understanding it actually might save you usage (depending on what else is going on within the task itself).
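The difference can be sketched with back-of-envelope arithmetic. All token counts below are hypothetical, and caching discounts and per-Task overhead are ignored; this only illustrates how the main context grows in each approach:

```python
# Compare total tokens sent across requests when editing N files inline
# (everything accumulates in one context) vs. delegating each file to a
# subagent that returns only a short summary. Numbers are made up.

FILE_TOKENS = 2_000      # assumed tokens per file (content + edits)
SUMMARY_TOKENS = 100     # assumed tokens in a subagent's returned summary
N_FILES = 5

def inline_context_tokens(n_files, file_tokens):
    """Each request re-sends the whole growing context (mostly cached)."""
    total = 0
    context = 0
    for _ in range(n_files):
        context += file_tokens   # this file's content/edits join the context
        total += context         # the full context is re-sent on each request
    return total

def subagent_context_tokens(n_files, file_tokens, summary_tokens):
    """Each subagent pays for its file once; only summaries accumulate."""
    total = 0
    context = 0
    for _ in range(n_files):
        total += file_tokens     # subagent reads/edits the file in its own context
        context += summary_tokens  # only the summary returns to the main context
        total += context
    return total

inline = inline_context_tokens(N_FILES, FILE_TOKENS)        # 30,000 tokens
delegated = subagent_context_tokens(N_FILES, FILE_TOKENS, SUMMARY_TOKENS)  # 11,500 tokens
```

With these assumed numbers the inline approach re-sends roughly 30k tokens versus 11.5k for the delegated one; whether that translates into lower billed usage depends on how cached tokens are discounted.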

I usually use Tasks for running tests, code generation, summarizing code flows, and performing web searches on docs and summarizing the necessary parts I need for later operations.

Running them in parallel is nice if you want to document code flows and have each task focus on a higher-level grouping; that way each task is hyper-focused on its own domain, and they all run together so you don’t have to wait as long. For example:

- “Feature A’s configuration”

- “Feature A’s access control”

- “Feature A’s invoicing”

stuaxo 5 days ago | parent | prev | next [-]

I hope you thoroughly go through these as a human - purely AI-written stuff can be horrible to read.

blitzar 5 days ago | parent [-]

Docstring slop is better than code slop - anyway, that is what git commits are for - and I have 4.5 hours to do that until the next reset.

debo_ 5 days ago | parent [-]

Coding is turning into an MMO!

Kurtz79 4 days ago | parent | prev [-]

If I understand correctly, looking at API pricing for Sonnet, output tokens are five times more expensive than input tokens ($15 vs. $3 per million tokens at list prices).

So, if rate limits are based on overall token cost, one will likely hit them sooner when CC reads a few files and writes a lot of text as output (comments/documentation) than when it analyzes a large codebase and then makes only a few edits to the code.
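A quick sanity check of that reasoning. The token counts below are hypothetical; the prices match Sonnet's list pricing at the time of writing ($3 per million input tokens, $15 per million output tokens):

```python
# Rough cost comparison under a 5x output/input price ratio.

INPUT_PER_MTOK = 3.00    # USD per million input tokens (assumed list price)
OUTPUT_PER_MTOK = 15.00  # USD per million output tokens (assumed list price)

def cost(input_tokens, output_tokens):
    """Dollar cost of one job given its input/output token counts."""
    return (input_tokens / 1e6) * INPUT_PER_MTOK + (output_tokens / 1e6) * OUTPUT_PER_MTOK

# Scenario A: read a few files, write lots of docstrings (output-heavy)
docstring_job = cost(input_tokens=30_000, output_tokens=60_000)   # $0.99

# Scenario B: scan a large codebase, make a few small edits (input-heavy)
analysis_job = cost(input_tokens=200_000, output_tokens=5_000)    # ~$0.68
```

With these assumed counts, the docstring job processes far fewer total tokens (90k vs. 205k) yet costs more, because output tokens dominate its bill - consistent with hitting a cost-based limit faster on documentation work.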