sdesol 3 days ago
Looking at the code, it does have some sort of automatic discovery. I also don't know how scalable Claude Code is. I've spent over a decade thinking about code search, so I know what the limitations are for enterprise code. One of the neat tricks I've developed: I load all the backend code for my search component, then ask the LLM to trace a query and create a context bundle containing only the affected files. Once the LLM has finished, a few clicks refine an 80,000-token context window down to about 20,000 tokens. I would not be surprised if this is one of the tricks it uses, as it is highly effective. Also, yes, my tool is manual, but I treat conversations as durable assets, so in the future you should be able to say "last week I did this, load the same files" and the LLM will know which files to bring into context.
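The context-bundle idea described above can be sketched in a few lines. This is an illustrative mock, not the commenter's actual tool: the file names, contents, and the 4-characters-per-token heuristic are all assumptions made up for the example.

```python
# Sketch of the "context bundle" workflow: load a broad set of backend
# files, keep only those a traced query touched, and measure the token
# reduction. All names and sizes here are synthetic.

def estimate_tokens(text: str) -> int:
    """Rough heuristic: ~4 characters per token."""
    return len(text) // 4

# Synthetic "backend" of eight files, ~10k tokens each (~80k total).
backend = {f"search/module_{i}.py": "x" * 40_000 for i in range(8)}

# Pretend the LLM's trace reported that only two files are affected.
affected = {"search/module_0.py", "search/module_3.py"}

bundle = {path: src for path, src in backend.items() if path in affected}

before = sum(estimate_tokens(src) for src in backend.values())
after = sum(estimate_tokens(src) for src in bundle.values())
print(before, after)  # 80000 20000
```

The real work, of course, is in the trace step that decides which files belong in `affected`; the payoff is that everything downstream operates on a quarter of the original context.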
pacoWebConsult 3 days ago
FWIW, Claude Code conversations are also durable. You can resume any past conversation in your project; they're stored as jsonl files under your `$HOME/.claude` directory. This retains the actual context from that conversation (your prompts, assistant responses, tool usages, etc.), not just the files you were touching.
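Since the transcripts are jsonl (one JSON object per line), they're easy to mine programmatically. A minimal sketch, assuming a simple `role`/`content` record shape — the actual schema Claude Code writes is richer and not documented here, so this builds its own sample file rather than reading a real one:

```python
import json
from pathlib import Path

# Build a synthetic transcript; the "role"/"content" layout is an
# assumption for illustration, not Claude Code's documented schema.
sample = Path("conversation.jsonl")
sample.write_text(
    "\n".join(
        json.dumps(obj)
        for obj in [
            {"role": "user", "content": "Trace the search query path"},
            {"role": "assistant", "content": "Starting in the handler..."},
            {"role": "tool", "content": "grep results..."},
        ]
    )
)

def load_conversation(path: Path) -> list[dict]:
    """Read a jsonl transcript: one JSON record per non-empty line."""
    with open(path) as f:
        return [json.loads(line) for line in f if line.strip()]

messages = load_conversation(sample)
user_turns = [m for m in messages if m["role"] == "user"]
print(len(messages), len(user_turns))  # 3 1
```

The same pattern (line-by-line `json.loads`) works on any jsonl file, which is what makes the format convenient for append-only conversation logs.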
handfuloflight 3 days ago
Excellent, I look forward to trying it out, at minimum to wean off my dependency on Claude Code and its likely current habit of overspending on context. I agree with treating conversations as durable assets.
ec109685 3 days ago
It greps around the code like an intern would. You have to be patient, document your workflows, and correct it via CLAUDE.md files when it gets things wrong.
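The CLAUDE.md mechanism mentioned above is just a markdown file of project instructions that Claude Code reads at the start of a session. The contents below are a hypothetical sketch of the kind of workflow notes and corrections the comment describes, not a prescribed format:

```markdown
# CLAUDE.md (example project notes -- contents are illustrative)

## Workflows
- To trace a search query, start in `search/handler.py`, then follow
  the call into the ranker before touching the index code.
- Run the test suite with `make test` before proposing a diff.

## Corrections
- Do NOT edit generated files under `gen/`; change the templates instead.
- The "legacy" search path is still live in production; don't delete it.
```

Over time this file accumulates the tribal knowledge you'd otherwise re-explain every session, which is what makes the "intern" workable.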