GoatInGrey 3 days ago
I'm unsure whether this also qualifies as incompetence/embryonic understanding, but I've used LLMs for hundreds of hours on development tasks and have also found that sub-agents are not good at programming. They're better suited to research tasks: providing informed context to the parent agent while isolating it from the token consumption that retrieving that context incurs. Zooming out, my finding with LLMs and programming is that they work well in specific patterns and quickly go to shit when completely unsupervised by an SME.
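To be concrete, the research pattern I mean looks roughly like this (a minimal Python sketch; call_llm, research_subagent, and parent_agent are all hypothetical names, not any particular framework's API):

    # Hypothetical sketch of the research sub-agent pattern; `call_llm`
    # stands in for whatever prompt -> completion function you use.
    def research_subagent(call_llm, question: str, sources: list[str]) -> str:
        """Burn tokens digesting sources in an isolated context,
        returning only a compact brief for the parent."""
        notes = [
            # Each raw source is read only inside the sub-agent's own
            # context; the parent never pays for these tokens.
            call_llm(f"Summarize what this says about {question!r}:\n{src}")
            for src in sources
        ]
        # Collapse the notes into one short brief -- the only text
        # that crosses back into the parent's context window.
        return call_llm("Condense these notes into a short brief:\n" + "\n".join(notes))

    def parent_agent(call_llm, task: str, sources: list[str]) -> str:
        brief = research_subagent(call_llm, task, sources)
        # The parent's context holds the brief, not the source material,
        # so its token budget stays focused on the actual task.
        return call_llm(f"Task: {task}\nBackground:\n{brief}\nProduce a plan.")

The point is that the expensive retrieval happens in a disposable context, and the parent only ever sees the distilled result.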
The LLMs all fuck up on something in every task they perform, due to the intersection of operating on assumptions and working in large problem spaces. The amount of effort it takes to completely eliminate assumptions from the agent makes the process slower than writing the code yourself. So people try to find the balance they're comfortable with.