mzl 3 hours ago
An LLM only outputs tokens, so this could be seen as an extension of tool calling, where the model has been trained on the knowledge and use cases for "tool-calling" itself as a sub-agent.
|
XCSme 3 hours ago
Ok, so agent swarm = tool calling where the tool is an LLM call and the argument is the prompt.
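A minimal sketch of that framing, using a hypothetical llm() helper rather than any particular API: the sub-agent is registered as an ordinary tool whose implementation is just another LLM call, and the tool's only argument is the prompt.

    # Sketch only: llm() is a hypothetical stand-in for any chat-completion API.
    def llm(prompt: str) -> str:
        """Placeholder for a real model call."""
        raise NotImplementedError

    def spawn_subagent(prompt: str) -> str:
        # The "tool" is nothing more than another LLM call on the given prompt.
        return llm(prompt)

    # Registered alongside ordinary tools, so the orchestrating model invokes it
    # through the same tool-calling mechanism it uses for everything else.
    TOOLS = {"spawn_subagent": spawn_subagent}

    def handle_tool_call(name: str, arguments: dict) -> str:
        return TOOLS[name](**arguments)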
IanCal 34 minutes ago
Yes, largely, although they've trained a model specifically for this task rather than using the base model and a bit of prompting.
dcre 2 hours ago
Sort of. It's not necessarily a single call. In the general case it would be spinning up a long-running agent with various kinds of configuration (prompts, but also the coding environment and which tools are available to it), like subagents in Claude Code.
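A rough sketch of that broader version, with made-up names (SubagentConfig, run_subagent) and a placeholder llm() call: the tool spins up a long-running agent loop with its own system prompt, working directory, and tool set, rather than making a single call.

    # Sketch under assumed names; llm() is a placeholder for a chat-completion
    # call that may return either a tool call or a final text answer.
    from dataclasses import dataclass, field
    from typing import Any, Callable

    def llm(messages: list[dict], tools: list[str]) -> dict[str, Any]:
        raise NotImplementedError

    @dataclass
    class SubagentConfig:
        system_prompt: str
        tools: dict[str, Callable[..., str]] = field(default_factory=dict)
        working_dir: str = "."   # e.g. a sandboxed coding environment
        max_turns: int = 20

    def run_subagent(task: str, cfg: SubagentConfig) -> str:
        # Long-running loop: the sub-agent keeps calling its tools until it
        # produces a final answer or hits its turn limit.
        messages = [{"role": "system", "content": cfg.system_prompt},
                    {"role": "user", "content": task}]
        for _ in range(cfg.max_turns):
            reply = llm(messages, tools=list(cfg.tools))
            call = reply.get("tool_call")
            if call is None:
                return reply["content"]
            result = cfg.tools[call["name"]](**call["args"])
            messages.append({"role": "tool", "content": result})
        return "sub-agent stopped: turn limit reached"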
|