Mic92 | 4 hours ago
So usually MCP tool calls are sequential and therefore waste a lot of tokens. There is some research from Anthropic (I think there was also a blog post from Cloudflare) on how code sandboxes are actually a more efficient interface for LLM agents, because models are really good at writing code and combining multiple "calls" into one piece of code. Another data point is that code is more deterministic and reliable, so you reduce LLM hallucinations.
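Rough sketch of what I mean, with made-up tool stubs (not real MCP APIs): instead of each tool result round-tripping through the model, the model writes one script and only the final result re-enters its context.

```python
# Hypothetical stand-ins for two MCP tools; the real tools would be
# whatever the server exposes.
def search_issues(repo: str, label: str) -> list[dict]:
    """Stand-in for a 'search issues' tool."""
    return [{"id": 1, "title": "Bug A"}, {"id": 2, "title": "Bug B"}]

def fetch_issue(issue_id: int) -> dict:
    """Stand-in for a 'fetch issue' tool."""
    return {"id": issue_id, "body": "details..."}

# Sequential tool-call style: each issue body would be returned to the
# model as a tool result, land in the context window, and be re-read on
# every subsequent step.
#
# Code-sandbox style: the model emits this script once; the
# intermediate results stay inside the sandbox and only the small
# summary below goes back into the model's context.
issues = search_issues("example/repo", label="bug")
bodies = [fetch_issue(i["id"])["body"] for i in issues]
summary = {"count": len(issues), "total_chars": sum(len(b) for b in bodies)}
print(summary)  # only this re-enters the model's context
```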
foota | 4 hours ago | parent
What do the calls being sequential have to do with tokens? Do you just mean that the LLM has to think every time it gets a response (as opposed to being able to compose them)?