Kevintbt 9 hours ago
I agree with that, but this way it's still bloat. If you're coding with AI, you're aware that every time a model reads 100 lines and doesn't find what it needs to modify, you bloat the context. I use Copilot these days (until June lol), and there's a context window measurement: every time the model reads a file to make a change, I assure you the window moves from, say, 8% to 12% (on GPT's 400k tokens). That's around 16k tokens of reads for something like a 10-line change. So I know about chunking, but this is how it works every time. You can also check how Claude Code introduced tool-step deletion months ago to unbloat the context window. Thank you for the advice Patrick :)