| ▲ | Bobaso 12 hours ago |
| Modern AI agent tools have a setting where you can trim down the number of tools exposed from an MCP server. Useful to avoid overwhelming the LLM with 80 tool descriptions when you only need 1. |
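A minimal sketch of how such trimming might look on the client side, assuming a hypothetical allowlist applied to the tool definitions before they reach the model (the names and structure here are illustrative, not any particular agent's setting):

  # Hypothetical allowlist: keep only the tool definitions we actually need
  # before they are added to the model's context.
  ALLOWED_TOOLS = {"search_issues"}

  def filter_tools(all_tools: list[dict]) -> list[dict]:
      """Drop every tool whose name is not on the allowlist."""
      return [t for t in all_tools if t["name"] in ALLOWED_TOOLS]

  # A server advertising 80 tools is reduced to the single one we need.
  tools = [{"name": f"tool_{i}", "description": "..."} for i in range(80)]
  tools.append({"name": "search_issues", "description": "Search the issue tracker"})
  print(len(filter_tools(tools)))  # -> 1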
|
| ▲ | the_mitsuhiko 12 hours ago | parent | next [-] |
| I don't find that helps much at all, particularly because some tools really only make sense alongside a bunch of other tools, and then your context is already polluted. It's surprisingly hard to do this right unless you have a single-tool MCP (e.g. a code/eval based tool, or an inference based tool). |
|
| ▲ | stavros 12 hours ago | parent [-] |
| Don't you have a post about writing Python instead of using MCP? I can't see how MCP is more efficient than giving the LLM a bunch of function signatures and allowing it to call them, but maybe I'm not familiar enough with MCP. |
| ▲ | the_mitsuhiko 12 hours ago | parent [-] |
| > Don't you have a post about writing Python instead of using MCP? |
| Yes, and that works really well. I also made various attempts at letting agents write code that exposes MCP tool calls via an in-language API. But it's just really, really hard to work with, because MCP tools are generally not in the training set, whereas normal APIs are. |
| ▲ | stavros 12 hours ago | parent [-] |
| Yeah, I've always thought that your proposal was much better. I don't know why one of the big companies hasn't released something that standardised on tool-calling via code, hm. |
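As a rough illustration of the code-over-MCP idea discussed above, here is a hedged sketch, assuming the agent is shown plain Python signatures (which are well represented in training data) and asked to emit a script that calls them; all function names are hypothetical:

  import inspect

  def search_issues(query: str) -> list[dict]:
      """Search the issue tracker and return matching issues."""
      return []  # stub for illustration

  def add_comment(issue_id: int, body: str) -> None:
      """Post a comment on an issue."""

  # The prompt is built from ordinary signatures and docstrings rather than
  # MCP tool-description JSON.
  exposed = {"search_issues": search_issues, "add_comment": add_comment}
  api_docs = "\n\n".join(
      f"def {name}{inspect.signature(fn)}:\n    \"{fn.__doc__}\""
      for name, fn in exposed.items()
  )
  print(api_docs)

  # The model replies with a small script the agent runs in a sandbox, e.g.:
  #   for issue in search_issues("flaky CI test"):
  #       add_comment(issue["id"], "Possibly related to the retry change.")

The point, per the comments above, is that the model writes against an API shape it has seen countless times, instead of against tool-call descriptions it has rarely seen.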
|
| ▲ | incoming1211 10 hours ago | parent | prev [-] |
| A remote MCP server with an API key that carries claims works well for reducing the tool count to only the tools you actually need. |
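A minimal sketch of that idea, assuming the key's claims carry scopes and the server filters its advertised tool list accordingly (tool and claim names are made up for illustration):

  # Hypothetical server-side filter: only advertise the tools covered by the
  # scopes found in the caller's API-key claims.
  ALL_TOOLS = {
      "read_tickets": "tickets:read",
      "close_ticket": "tickets:write",
      "run_report":   "reports:run",
  }

  def tools_for_key(claims: dict) -> list[str]:
      """Return the names of the tools the key's scopes allow."""
      granted = set(claims.get("scopes", []))
      return [name for name, scope in ALL_TOOLS.items() if scope in granted]

  # A key issued with only "tickets:read" sees a single tool, so the model's
  # context never carries the other descriptions.
  print(tools_for_key({"scopes": ["tickets:read"]}))  # -> ['read_tickets']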