bredren | 3 days ago
This problem is real. Claude Projects, ChatGPT Projects, Sourcegraph Cody's context building, MCP file systems: all of these are black boxes doing what I can only describe as lossy compression of context. Each is incentivized to deliver ~"pretty good" results at the highest token compression possible.

The best way around this I've found is to just own the web clients by including structured concatenations of related files directly in chat contexts. Self plug but super relevant: I built FileKitty specifically to aid this; it made the HN front page and I've continued to improve it: https://news.ycombinator.com/item?id=40226976

If you can prepare your file system context yourself quickly, using any workflow, and pair it with appropriate additional context such as run output and a problem description, you can get excellent results, and you can pound away at an OpenAI or Anthropic subscription refining the prompt or updating the file context. I've been finding myself spending more time assembling complex prompts for big, difficult problems that would not make sense to solve in the IDE.
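To make that concrete, here is a rough sketch of that kind of hand-assembled context (this is not FileKitty itself, and every path below is a placeholder): concatenate the relevant files with path headers, then append run output and the problem description.

    #!/usr/bin/env bash
    # Sketch: build one prompt file from hand-picked sources plus run output
    # and a problem description. All file names are placeholders.
    set -euo pipefail

    out=prompt.txt
    : > "$out"

    # Each source file gets a header naming its path, so the model sees the layout.
    for f in src/app.py src/utils.py tests/test_app.py; do
        printf '===== %s =====\n' "$f" >> "$out"
        cat "$f" >> "$out"
        printf '\n\n' >> "$out"
    done

    # Then the run output (e.g. a failing test log) and the written-up problem.
    printf '===== run output =====\n' >> "$out"
    cat run_output.log >> "$out"

    printf '\n===== problem description =====\n' >> "$out"
    cat problem.md >> "$out"

The resulting prompt.txt is what gets pasted or attached into the web client, and you can regenerate it in seconds as the files change.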
airstrike | 3 days ago
> The best way around this I've found is to just own the web clients by including structured concatenations of related files directly in chat contexts.

Same. I used to run a bash script that concatenates the files I'm interested in and annotates each with its path/name at the top in a comment. I haven't needed that recently; I think the number of attachments Claude accepts has increased (or I haven't needed as many small, disparate files at once).
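Something in the spirit of this sketch (not the original script, just the general shape):

    #!/usr/bin/env bash
    # Print each file passed as an argument, preceded by a comment line
    # naming its path, so the whole bundle can be pasted into a chat.
    for f in "$@"; do
        echo "# --- $f ---"
        cat "$f"
        echo
    done

For example, ./concat.sh src/*.py | pbcopy to copy the result on macOS (or pipe to xclip -selection clipboard on Linux), then paste it into the chat.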
asadm | 3 days ago
FileKitty is pretty cool!