storystarling 13 hours ago
How did you handle the context window for 20k lines? I assume you aren't feeding in the whole codebase every time, given the API costs. I've struggled to keep agents coherent on larger projects without blowing the budget, so I'm curious if you used a specific scoping strategy here.
simonw 12 hours ago
GPT-5.2 has a 400,000 token context window. Claude Opus 4.5 is just 200,000 tokens. To my surprise this doesn't seem to limit their ability to work with much larger codebases - the coding agent harnesses have got really good at grepping for just the code they need in context, similar to how a human engineer can make changes to a million lines of code without having to hold it all in their head at once.
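To make that concrete, here's a rough sketch of what a grep-style retrieval tool inside such a harness might look like. This is an illustration only, not any particular harness's code; `search_repo` and the result cap are invented for the example.

```python
from pathlib import Path

MAX_RESULTS = 20  # cap how many hits go back into the context window

def search_repo(pattern: str, root: str = ".") -> list[str]:
    """Return 'path:line: text' hits for a plain substring search."""
    hits: list[str] = []
    for path in Path(root).rglob("*.py"):
        try:
            lines = path.read_text(errors="ignore").splitlines()
        except OSError:
            continue
        for lineno, line in enumerate(lines, 1):
            if pattern in line:
                hits.append(f"{path}:{lineno}: {line.strip()}")
                if len(hits) >= MAX_RESULTS:
                    return hits
    return hits

# The harness registers something like this as a tool; the model searches,
# reads the handful of matching lines, and only then requests full files
# for the paths it actually needs.
print("\n".join(search_repo("def main")))
```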
embedding-shape 10 hours ago
I didn't, Codex (TUI/CLI) did - it does it all by itself. I have one REQUIREMENTS.md that is specific to the project and an AGENTS.md that I reuse across most projects. Then I give Codex (gpt-5.2 with reasoning effort set to xhigh) a prompt + screenshot, tell it to get it working somewhat similarly, wait until it completes, review that it worked, then continue. Most of the time when I develop professionally I restart the session after each successful change; for this project I initially tried to let one session go as long as possible, but eventually I reverted to my old habit of restarting from 0 after each successful change.

For knowing which files it should read or write, it uses `ls`, `tree` and `ag` most commonly. There is no out-of-band indexing or anything, just a unix shell controlled by an LLM via tool calls.
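Roughly, the setup is just this kind of loop - a minimal sketch, not Codex's actual implementation; the `run_shell` helper and its allow-list here are made up for the example:

```python
import subprocess

# Illustrative allow-list; a real harness would sandbox more carefully.
ALLOWED = {"ls", "tree", "ag", "rg", "cat"}

def run_shell(command: str, timeout: int = 30) -> str:
    """Run one shell command and return its output, truncated so the
    tool result stays small in the model's context window."""
    parts = command.split()
    if not parts or parts[0] not in ALLOWED:
        return f"refused: {command!r} is not on the allow-list"
    result = subprocess.run(
        command, shell=True, capture_output=True, text=True, timeout=timeout
    )
    return (result.stdout or result.stderr)[:8000]

# Typical exchange: the model asks for something like `ag "handleSubmit" src/`,
# reads the matching paths, then cats only the files it actually needs.
print(run_shell("ls"))
```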
nurettin 12 hours ago
You don't load the entire project into the context. You let the agent work on a few 600-800 line files, one feature at a time.