simonw a day ago
I don't think subagents are representative of anything particularly interesting on the "agents can run themselves" front. They're tool calls. Claude Code provides a tool that effectively lets the model hand a prompt to a fresh instance of itself and get back the result.
The current frontier models are all capable of "prompting themselves" in this way, but it's really just a parlor trick to help avoid burning more tokens in the top context window. It's a really useful parlor trick, but I don't think it tells us anything profound.
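The "tool call" framing above can be sketched in a few lines. This is a hedged illustration, not Claude Code's actual API: `run_model` and `task_tool` are hypothetical stand-ins for a real LLM call and the real Task tool, showing only the shape of the mechanism (parent sends a prompt, a fresh-context run happens elsewhere, only the final report comes back).

```python
# Sketch of a "subagent" as a plain tool call. run_model() and task_tool()
# are hypothetical names, not Claude Code's real API.

def run_model(prompt: str) -> str:
    # Stub standing in for a real LLM invocation with an empty context window.
    return f"summary of work for: {prompt!r}"

def task_tool(prompt: str) -> str:
    """Tool the parent agent can call: run the prompt in a fresh context
    and return only the final report, keeping the parent's context small."""
    return run_model(prompt)

# The parent model "prompts itself" through the tool; only the summary
# string ever lands in the parent's context window.
report = task_tool("Search the repo for TODO comments and summarize them")
print(report)
```

The token-saving point falls out of the structure: the subagent's intermediate work never enters the parent's context, only the returned string does.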
ctoth a day ago | parent
The mechanism being simple is exactly the interesting part. If one large, complex goal can be split into subgoals, and those subgoals can be completed without you, then you need far fewer humans to do far more work. The OP says AI requires human interaction to work. That simply isn't true. You know yourself that as agents get more reliable you can delegate more to them, including having them launch subagents of their own, getting more work done with fewer and fewer humans. The unlock is the Task tool, but the power comes from smarter and smarter models actually being able to decompose and delegate hierarchical tasks well.
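The hierarchical-delegation claim can be sketched as a fan-out: a parent agent splits a goal and only ever sees the subagents' results. Everything here is a hypothetical stand-in (`split_goal` would be a planner model, `solve` a fresh-context subagent run); the point is that no human appears anywhere below the top-level goal.

```python
# Hedged sketch of hierarchical delegation: one goal split into subgoals,
# each completed by a stubbed "subagent", with no human in the loop.
# split_goal() and solve() are illustrative names, not a real agent API.

def split_goal(goal: str) -> list[str]:
    # A real planner model would produce these; hard-coded for illustration.
    return [f"{goal}: part {i}" for i in (1, 2, 3)]

def solve(subgoal: str) -> str:
    # Stub for a fresh-context subagent run (which could itself delegate).
    return f"done({subgoal})"

def delegate(goal: str) -> list[str]:
    # The parent fans out subgoals and sees only the results, so a human
    # is needed once, at the top, regardless of how deep the tree goes.
    return [solve(s) for s in split_goal(goal)]

results = delegate("refactor module")
print(results)
```

Since `solve` could itself call `delegate`, the same three functions describe an arbitrarily deep tree of subagents under a single human-issued goal.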