nicwolff | 5 days ago
I'm not chatting with the LLM – I'm giving one LLM in "orchestrator mode" a detailed description of my required change, plus a ton of "memory bank" context about the architecture of the app, the APIs it calls, our coding standards, etc. Then it uses other LLMs in "architect mode" or "ask mode" to break the task into subtasks, and assigns those to still other LLMs in "code mode" and "debug mode".

When they're all done I review the output, and either clean it up a little and open a PR, or throw it away, tune my initial prompt and the memory bank, and start over. They're just code-generating machines, not real programmers worth iterating with – for one thing, they won't learn anything that way.
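To make the shape of this concrete, here's a minimal sketch of the pipeline as I think of it. All the names (`architect`, `coder`, `Orchestrator`, the memory-bank keys) are hypothetical stand-ins for whatever your tool actually calls, and the LLM calls are stubbed out – the point is only the one-way flow: context + task in, subtasks fanned out, outputs collected for human review, no back-and-forth with the workers.

```python
from dataclasses import dataclass


def architect(task: str, context: str) -> list[str]:
    """Stand-in for an 'architect mode' LLM call: split the task
    description into subtasks. Stubbed here with a fixed split."""
    return [f"{task} – subtask {i}" for i in (1, 2)]


def coder(subtask: str, context: str) -> str:
    """Stand-in for a 'code mode' LLM call: produce code for one
    subtask, given the shared memory-bank context."""
    return f"# generated for: {subtask}\n# honoring: {context}"


@dataclass
class Orchestrator:
    """One-shot dispatcher: no iterative chat with the workers."""
    memory_bank: dict[str, str]

    def run(self, task: str) -> list[str]:
        # Flatten the memory bank into context shared by every worker.
        context = "; ".join(f"{k}={v}" for k, v in self.memory_bank.items())
        subtasks = architect(task, context)
        # Fan out to 'code mode' workers; collect raw outputs for
        # human review – accept and open a PR, or discard and re-prompt.
        return [coder(s, context) for s in subtasks]


orch = Orchestrator(memory_bank={"standards": "PEP 8", "api": "internal REST v2"})
outputs = orch.run("add pagination to the users endpoint")
```

If the collected `outputs` aren't right, you don't argue with the workers – you edit the task description or the `memory_bank` and rerun the whole thing.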