michaelanckaert a day ago

The "Tool Search Tool" is a clever addition that you could easily add yourself for other models / providers. I did something similar with a couple of agents I wrote.

First LLM Call: only pass the "search tool" tool. The output of that tool is a list of suitable tools the LLM searched for. Second LLM Call: pass the additional tools that were returned by the "search tool" tool.
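A minimal sketch of that two-call pattern (all names here are hypothetical, and `llm_call` stands in for whatever provider API you use; the registry search is a simple keyword match):

```python
# Hypothetical tool registry: name -> schema/description. In a real agent
# these would be full JSON tool definitions.
TOOL_REGISTRY = {
    "create_github_pr": {"description": "Open a pull request on GitHub"},
    "list_s3_buckets": {"description": "List S3 buckets in an AWS account"},
    "send_slack_message": {"description": "Post a message to a Slack channel"},
}

def search_tools(query: str) -> list[str]:
    """The 'search tool' tool: return names of registered tools matching the query."""
    words = query.lower().split()
    return [
        name for name, spec in TOOL_REGISTRY.items()
        if any(w in spec["description"].lower() or w in name for w in words)
    ]

def run_agent(user_request: str, llm_call) -> str:
    # First LLM call: only the search tool is exposed. We treat its output
    # as the search query for simplicity.
    query = llm_call(user_request, tools=["search_tools"])
    matched = search_tools(query)
    # Second LLM call: expose only the tools the search returned.
    return llm_call(user_request, tools=matched)
```

The point is just that the model never sees the full registry; it only ever gets the search tool plus whatever the search surfaced.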

slimslenders 5 hours ago | parent | next [-]

I think this is very true. Tool search tools can be model-agnostic. And programmatic tool calling really just needs a code sandbox tool. We've provided some examples of these patterns on top of a local Docker engine (oss project is here https://github.com/docker/mcp-gateway/ and blog is https://www.docker.com/blog/dynamic-mcps-stop-hardcoding-you...).
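To illustrate the "just a code sandbox tool" idea, here's a toy sketch (function names assumed, not from any real project): a single `run_code` tool executes model-generated Python with tool functions injected, so the model composes tools in code instead of making many separate tool calls. A real sandbox would isolate the `exec` in a container rather than run it in-process.

```python
def get_weather(city: str) -> str:
    # Stand-in for a real tool the sandbox exposes.
    return f"sunny in {city}"

def run_code(source: str) -> str:
    """One sandbox tool: run model-written Python with tools in scope."""
    env = {"get_weather": get_weather, "results": []}
    exec(source, env)  # in production, isolate this (e.g. in a container)
    return "\n".join(env["results"])
```

The model would then emit something like `results.append(get_weather('Ghent'))` as the tool input.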

stavros a day ago | parent | prev | next [-]

When reading the article, I thought this would be an LLM call, ie the main agent would call `find_tool("I need something that can create GitHub PRs")`, and then a subagent with all the MCP tools loaded in its context would return the names of the suitable ones.

I guess regex/full-text search works too, but an LLM would be much less sensitive to exact keyword choice.
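The subagent variant described above could be sketched like this (names hypothetical; `subagent` stands in for a second LLM call whose context holds every tool description):

```python
# Hypothetical catalog of all available tools: name -> description.
ALL_TOOLS = {
    "create_github_pr": "Open a pull request on GitHub",
    "merge_github_pr": "Merge an existing GitHub pull request",
    "send_email": "Send an email via SMTP",
}

def find_tool(need: str, subagent) -> list[str]:
    """Ask a subagent, not a regex, which tools satisfy a natural-language need."""
    catalog = "\n".join(f"{name}: {desc}" for name, desc in ALL_TOOLS.items())
    prompt = (
        "Given these tools:\n" + catalog +
        f"\n\nList, one per line, the tool names that satisfy: {need}"
    )
    reply = subagent(prompt)
    # Keep only names that actually exist, in case the subagent hallucinates.
    return [line.strip() for line in reply.splitlines()
            if line.strip() in ALL_TOOLS]
```

The main agent would then be given only the returned tool definitions on its next call.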

RobertDeNiro a day ago | parent | prev [-]

Since it's a tool itself, I don't see the benefit of relying on Anthropic for this. If anything, it now becomes vendor lock-in.

michaelanckaert a day ago | parent | next [-]

Correct, I wouldn't use it myself as it's a trivial addition to your own implementation. Personally I keep all my work in this space as provider-agnostic as I can. When the bubble eventually pops there will be victims, and you don't want a stack that's hard-coded to one of the casualties.

BoorishBears a day ago | parent | prev [-]

They can post-train the model on usage of their specific tool along with the specific prompt they're using.

LLMs obviously generalize, but I also wouldn't be shocked if it performs better than a "normal" implementation.