patrick-elmore 10 hours ago
It should never read every full file. It should be grepping to find candidates, then reading chunks of each file around the hits to see if they are genuinely relevant to whatever you are trying to gather context for. If the chunk surrounding a grep hit appears relevant, then it can pull in a larger portion, or the entire file if appropriate.
Kevintbt 9 hours ago | parent
I agree with that, but it's still bloat this way. If you're coding with AI, you're aware that every time a model reads 100 lines and doesn't find what it needs to modify, you bloat the context. I use Copilot these days (until June, lol), and it has a context-window measurement: every time the model reads a file to make a change, I assure you the window moves from, for example, 8% to 12% (on GPT's 400k tokens). That's about 16k tokens of reads for something like a 10-line change, so I know about chunking, but this is how it works every time. You can also check how Claude Code introduced tool-step deletion months ago to unbloat the context window. Thank you for the advice, Patrick :)