| ▲ | 0x696C6961 5 hours ago |
| In what world is this simpler than just giving the agent a list of functions it can call? |
|
| ▲ | Mic92 4 hours ago | parent | next [-] |
| So usually MCP tool calls are sequential and therefore waste a lot of tokens. There is some research from Anthropic (I think there was also a blog post from Cloudflare) showing that code sandboxes are a more efficient interface for LLM agents, because they are really good at writing code and at combining multiple "calls" into one piece of code. Another data point: code is more deterministic and reliable, so you reduce LLM hallucination. |
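A minimal sketch of the idea (the tool names `search` and `fetch` are hypothetical stand-ins for MCP tools, not a real API): instead of three sequential tool-call round-trips, each putting a full response into the model's context, the agent emits one script that composes the calls, and only the final result comes back.

```python
def search(query):
    # Stand-in for an MCP "search" tool; returns document ids.
    return ["doc-1", "doc-2", "doc-3"]

def fetch(doc_id):
    # Stand-in for an MCP "fetch" tool; returns document text.
    return f"contents of {doc_id}"

# One sandboxed script replaces several call/response cycles;
# the intermediate document ids never enter the model's context.
texts = [fetch(d) for d in search("mcp efficiency")]
result = " | ".join(texts)
print(result)
```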
| ▲ | foota 4 hours ago | parent [-] |
| What do the calls being sequential have to do with tokens? Do you just mean that the LLM has to think every time it gets a response (as opposed to being able to compose them)? |
| ▲ | zozbot234 4 hours ago | parent [-] |
| LLMs can use CLI interfaces to compose multiple tool calls, filter the outputs, etc., instead of polluting their own context with a full response they know they won't care about. Command-line access ends up being cleaner than the usual MCP-and-tool-calls workflow. It's not just Anthropic; the Moltbot folks found this to be the case too. |
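To make the CLI-composition point concrete, here is a toy pipeline (the data is made up for illustration) in the spirit described above: several "calls" are chained and filtered so that only the one value the agent needs would reach its context, rather than every intermediate result.

```shell
# Toy example: produce records, sort by score, keep only the top name.
# Only the single final token would be fed back to the model.
printf 'alpha 3\nbeta 7\ngamma 5\n' \
  | sort -k2 -rn \
  | head -n 1 \
  | cut -d' ' -f1
```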
| ▲ | foota 4 hours ago | parent [-] |
| That makes sense! The only flaw here imo is that sometimes that thinking is useful. Sub-agents for tool calls imo make a nice sort of middle ground where they can both be flexible and save context. Maybe we need some tool-call composing feature, a la io_uring :) |
|
|
|
|
| ▲ | dvt 4 hours ago | parent | prev [-] |
| Who implements those functions? E.g., store.order has to have its logic somewhere. |