saberience 12 days ago
It's not complicated at all. All it does is expose methods as "tools", each defined with a name, description, and input parameters. E.g. Name: "MySqlTool", Description: "Allows arbitrary MySQL queries to the XYZ database", Parameters: "string: sqlToExecute".

The MCP client (e.g. Claude Desktop, Claude Code) is configured to talk to an MCP server over stdio or SSE and calls a method like "tools/list"; the server just sends back a JSON list of all the tools with their names, descriptions, and parameters.

Then, if the LLM gets a query that calls for one of those tools (e.g. do a web search, scrape a page, run a SQL query), it outputs a tool-use token and stops inferencing. The client code then calls that tool over stdio/SSE (JSON-RPC); the MCP server runs the corresponding method and returns the result, the result is added to the LLM's message history, and inference runs again from the beginning.
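To make it concrete, here's a rough sketch of the server side in Python, with the transport reduced to one JSON-RPC message per line on stdio. It's a sketch, not a real implementation: the initialize handshake is skipped, run_query is a made-up stand-in, and the exact field names follow my reading of the MCP spec and may differ slightly.

    import json
    import sys

    # The single tool this toy server exposes, using the example from above.
    TOOLS = [{
        "name": "MySqlTool",
        "description": "Allows arbitrary MySQL queries to the XYZ database",
        "inputSchema": {
            "type": "object",
            "properties": {"sqlToExecute": {"type": "string"}},
            "required": ["sqlToExecute"],
        },
    }]

    def run_query(sql):
        # Stand-in: a real server would execute the SQL and return rows.
        return "(pretend rows for: %s)" % sql

    def handle(request):
        # Dispatch on the JSON-RPC method name the client sent.
        if request["method"] == "tools/list":
            result = {"tools": TOOLS}
        elif request["method"] == "tools/call":
            args = request["params"]["arguments"]
            result = {"content": [{"type": "text",
                                   "text": run_query(args["sqlToExecute"])}]}
        else:
            result = {}
        return {"jsonrpc": "2.0", "id": request.get("id"), "result": result}

    # One JSON-RPC message per line on stdin; replies go to stdout.
    for line in sys.stdin:
        print(json.dumps(handle(json.loads(line))), flush=True)

The client side is just the mirror image: it calls tools/list once, hands the returned JSON to the LLM as tool definitions, and whenever the model emits a tool-use block it sends a tools/call and feeds the result back into the message history.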
runako 12 days ago | parent
I think people who have been building with LLMs have a different view on what is complicated vs not :-) It may be easy for you to configure, but you dropped some acronyms in there that I would have to look up. I have definitely not personally set up anything like this.