| ▲ | ctoth a day ago | |
The mechanism being simple is the interesting part. If one large, complex goal can be split into subgoals, and the subgoals completed without you, then you need far fewer humans to do far more work. The OP says AI requires human interaction to work. This simply isn't true. You know yourself that as agents get more reliable you can delegate more to them, including having them launch their own subagents, thereby getting more work done with fewer and fewer humans. The unlock is the Task tool, but the power comes from smarter and smarter models actually being able to decompose and delegate hierarchical tasks well!
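The recursive delegation being described can be sketched in a few lines of plain Python. This is a toy illustration, not any real agent framework's API: `run_subagent` and `split` are hypothetical stand-ins for a model deciding how to decompose a task and spawning children to handle the pieces.

```python
# Hypothetical sketch of hierarchical task delegation. In a real system,
# split() and the leaf work would be model calls; here they are fakes.
def split(task: str) -> list[str]:
    # Stand-in for the model deciding whether/how to decompose a task.
    parts = task.split(" and ")
    return parts if len(parts) > 1 else []

def run_subagent(task: str, depth: int = 0) -> str:
    subgoals = split(task)
    if depth >= 2 or not subgoals:
        return f"done: {task}"  # leaf-level work happens here
    # Each subgoal gets its own fresh subagent, which may recurse further.
    results = [run_subagent(g, depth + 1) for g in subgoals]
    return f"{task} <- [{'; '.join(results)}]"

print(run_subagent("write parser and add tests"))
# -> write parser and add tests <- [done: write parser; done: add tests]
```

The point of contention in the replies below is whether this structure adds capability or merely bookkeeping: the recursion tree gets deeper, but every leaf is still a single model call.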
| ▲ | otabdeveloper4 a day ago | parent | next [-] | |
You misunderstand. The only reason to launch subagents is to avoid poisoning the LLM's already-small context window with unrelated tokens. It doesn't make the LLM smarter or more capable.
| ▲ | suttontom a day ago | parent | prev | next [-] | |
Wtf? A sub-agent is a tool you give an agent, with an instruction like "If you need to analyze logs, delegate to the logs_viewer agent," so that the parent's context window doesn't fill up with hundreds of thousands of unnecessary tokens. In what universe does that mechanism mean you need fewer humans? Do you think "Build a car" can be accomplished just because one LLM can send a prompt to another LLM that reports back a response?
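The context-isolation argument made in this comment and the one above can be shown with a minimal sketch. Everything here is illustrative: `analyze_logs` stands in for a `logs_viewer` sub-agent, and the parent's context is just a list of strings, not any real framework's message history.

```python
# Illustrative sketch of context isolation via delegation; the agents
# and "context" are fakes, not any real agent framework's API.
def analyze_logs(logs: list[str]) -> str:
    # Sub-agent: ingests every line in its OWN context, returns a summary.
    errors = [line for line in logs if "ERROR" in line]
    return f"{len(errors)} errors in {len(logs)} lines"

def parent_agent(logs: list[str]) -> list[str]:
    context: list[str] = ["user: investigate the outage"]
    # Without delegation, all log lines would be appended here; with
    # delegation, only the one-line summary enters the parent's context.
    context.append("logs_viewer: " + analyze_logs(logs))
    return context

logs = [f"line {i}" for i in range(100_000)] + ["ERROR boom"]
print(len(parent_agent(logs)))  # parent context holds 2 entries, not 100,001
```

The mechanism saves tokens in the parent's window; whether that translates into needing fewer humans is exactly what the thread is disputing.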