rocqua 3 days ago
I wonder why we can't have one LLM generate this understanding for another. Perhaps this is where teaming of LLMs gets its value: managing high-level and low-level context in separate context windows.
mixedCase 3 days ago | parent
This is a thing and doesn't require a separate model. You can set up custom prompts that, given another prompt describing the task to achieve, gather information about the codebase and produce a set of TODOs to accomplish the task, writing markdown files with a summarized version of the relevant knowledge and prompting you again to refine that summary if needed. You can then hand these files to the agent and let it take over without going on a wild goose chase.
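
To make that concrete, here is a rough Python sketch of what such a planning pass might look like. Everything here is illustrative: the prompt wording, the `PLAN.md` filename, and the `call_llm` callable are all placeholders for whatever model, API client, or CLI wrapper you actually use.

```python
from pathlib import Path
from typing import Callable

# Illustrative planning prompt; adjust the wording and sections to your codebase.
PLANNING_PROMPT = """\
You are preparing context for a coding agent.

Task: {task}

1. Summarize the parts of the codebase relevant to this task.
2. List the concrete TODOs, in order, needed to accomplish it.

Answer in markdown with the sections "## Context" and "## TODOs".
"""

def generate_plan(task: str, call_llm: Callable[[str], str],
                  out_path: str = "PLAN.md") -> Path:
    """Run one planning pass and write the summary plus TODO list to disk.

    `call_llm` is whatever maps a prompt string to a model reply
    (API client, local model, or a wrapper around an agent CLI).
    """
    plan = call_llm(PLANNING_PROMPT.format(task=task))
    path = Path(out_path)
    path.write_text(plan)
    return path

if __name__ == "__main__":
    # Dummy model so the sketch runs end to end; swap in a real call.
    fake_llm = lambda prompt: "## Context\n(stub)\n\n## TODOs\n- (stub)\n"
    print(generate_plan("Add rate limiting to the public API", fake_llm))
```

The "refine that summary" step is just you reading and editing the generated file (or re-prompting against it) before pointing the agent at it for the actual implementation work.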