handfuloflight | 3 days ago
Well, you should look at it, because it's not going through all the files. I looked at your product, and the workflow essentially asks me to do manually what Claude Code does automatically. Granted, manually selecting the context will probably lead to lower costs in any case, because Claude Code invokes tool calls like grep to do its search, so I do see merit in your product in that respect.
sdesol | 3 days ago | parent
Looking at the code, it does have some sort of automatic discovery. I also don't know how scalable Claude Code is. I've spent over a decade thinking about code search, so I know what the limitations are for enterprise code. One of the neat tricks I've developed: I load all my backend code for my search component, then ask the LLM to trace a query and create a context bundle containing only the files that are affected. Once the LLM has finished, a few clicks refine an 80,000-token context window down to about 20,000 tokens. I would not be surprised if this is one of the tricks Claude Code uses, as it is highly effective. Also, yes, my tool is manual, but I treat conversations as durable assets, so in the future you should be able to say "last week I did this, load the same files," and the LLM will know which files to bring into context.
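The refinement step above can be sketched roughly like this: load every backend file, keep only the files the trace marked as affected, and watch the token budget shrink. This is a minimal illustration, not the actual tool; the file names, the affected set, and the ~4-characters-per-token estimate are all assumptions for the example.

```python
def estimate_tokens(text: str) -> int:
    """Rough heuristic: about 4 characters per token."""
    return len(text) // 4

def build_bundle(files: dict[str, str], affected: set[str]) -> dict[str, str]:
    """Keep only the files the LLM's trace flagged as affected."""
    return {path: src for path, src in files.items() if path in affected}

# Illustrative stand-ins for real backend sources.
files = {
    "search/query.py": "q" * 40_000,
    "search/index.py": "i" * 120_000,
    "search/rank.py":  "r" * 60_000,
    "api/routes.py":   "a" * 100_000,
}
affected = {"search/query.py", "search/rank.py"}

bundle = build_bundle(files, affected)
before = sum(estimate_tokens(s) for s in files.values())
after = sum(estimate_tokens(s) for s in bundle.values())
print(before, after)  # full context vs. refined bundle, in estimated tokens
```

With these made-up sizes the bundle drops from roughly 80,000 estimated tokens to 25,000, which is the shape of the 80k-to-20k refinement described above.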