rfw300 · a day ago
I am extremely excited to use programmatic tool use. This has, to date, been the most frustrating aspect of MCP-style tools for me: if some analysis requires the LLM to first fetch data and then write code to analyze it, the LLM is forced to manually copy a representation of the data into its interpreter. Programmatic tool use feels like the way it always should have worked, and where agents seem to be going more broadly: acting within sandboxed VMs with a mix of custom code and programmatic interfaces to external services. This is a clear improvement over the LangChain-style Rube Goldberg machines that we dealt with last year.
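To make the contrast concrete, here is a minimal sketch of the pattern being described: the model writes one script that both fetches and analyzes the data inside the sandbox, so the raw payload never round-trips through the context window. The `fetchMetrics` binding and the `Metric` shape are hypothetical stand-ins for whatever tool bindings a real sandbox would inject, stubbed here so the sketch runs.

```typescript
// Hypothetical shape of a tool's result in the sandbox.
interface Metric { name: string; value: number; }

// Stand-in for a tool-call binding injected into the sandboxed VM.
// A real runtime would proxy this to an external service.
async function fetchMetrics(service: string): Promise<Metric[]> {
  return [
    { name: "latency_ms", value: 120 },
    { name: "latency_ms", value: 80 },
    { name: "error_rate", value: 0.02 },
  ];
}

async function main() {
  const metrics = await fetchMetrics("checkout");
  // The model-written analysis runs next to the data instead of
  // re-serializing it into a second tool call.
  const latencies = metrics.filter(m => m.name === "latency_ms");
  const mean = latencies.reduce((sum, m) => sum + m.value, 0) / latencies.length;
  console.log(mean); // prints 100
}
main();
```

Without programmatic tool use, the model would have to read the tool result into its context and paste it back into an interpreter call, which is exactly the copy step being complained about.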
zbowling · 5 hours ago
I built an MCP server that solves this, actually. It works as a tool-calling proxy: instead of serving its child servers' tools up as direct tool calls, it exposes them as TypeScript definitions, asks your LLM to write code that invokes them all together, and then executes that TypeScript in a restricted VM to do the tool calling indirectly. If you have tools that pass data between each other, or that need some parsing or manipulation of output (say a tool call returns JSON), it's trivial to transform it. https://github.com/zbowling/mcpcodeserver
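A sketch of what that pattern might look like from the model's side: the proxy would generate typed bindings from the child servers' tool schemas, and the model writes one script that chains them, transforming the JSON in between. The function names, signatures, and stub bodies below are illustrative assumptions, not mcpcodeserver's actual generated API; they are stubbed so the sketch runs outside a VM.

```typescript
// Hypothetical bindings a proxy could inject into the restricted VM,
// generated from child MCP servers' tool schemas. Stubbed for the sketch:
async function github_listIssues(repo: string): Promise<string> {
  // A real binding would proxy this call to the child MCP server.
  return JSON.stringify([
    { title: "Fix sandbox timeout", state: "open" },
    { title: "Pin dependencies", state: "closed" },
  ]);
}
async function slack_postMessage(channel: string, text: string): Promise<void> {
  console.log(`[${channel}] ${text}`);
}

// The model writes a single script instead of chaining separate tool
// calls, parsing and filtering the first tool's JSON before passing it on:
async function run() {
  const raw = await github_listIssues("zbowling/mcpcodeserver");
  const issues: { title: string; state: string }[] = JSON.parse(raw);
  const open = issues.filter(i => i.state === "open").map(i => i.title);
  await slack_postMessage("#triage", `Open issues: ${open.join(", ")}`);
}
run();
```

The transformation step (parse, filter, format) is exactly the part that is awkward when each tool call has to pass through the model's context as text.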
menix · a day ago
smolagents by Hugging Face tackles your issues with MCP tools. They added support for the output schema and structured output introduced in the latest MCP spec, so printing and inspecting intermediate results is no longer necessary. https://huggingface.co/blog/llchahn/ai-agents-output-schema