crowcroft 8 days ago
I think we're probably overusing MCPs. If you're a large org with an API that an ecosystem of partners uses, then you should host a remote MCP and people should connect LLMs to it. The current model, where someone bundles tools into an MCP and then you download and run that MCP locally, feels like the wrong path. Tool definitions for LLMs are already pretty standardized. If things are just running locally, why am I not simply importing a package of tools? I'm not sure what the MCP server is adding.
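For concreteness, here is a minimal sketch of what "standardized tool definitions" means in practice, in the JSON-schema style shared by the major chat-completion APIs. The `get_weather` tool, its parameters, and the field names are illustrative, not from any particular vendor's schema:

```python
# A plain tool definition in the JSON-schema style that major LLM
# APIs already accept. Nothing here requires an MCP server; it is
# just data you could ship as an ordinary package.
# "get_weather" is a made-up example tool.

def make_weather_tool():
    return {
        "name": "get_weather",
        "description": "Return the current weather for a city.",
        "input_schema": {
            "type": "object",
            "properties": {
                "city": {"type": "string", "description": "City name"},
            },
            "required": ["city"],
        },
    }

if __name__ == "__main__":
    tool = make_weather_tool()
    print(tool["name"])
```

A local "package of tools" in this sense is just a module exporting dicts like this plus the functions that implement them, which is the comparison crowcroft is drawing.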
empath75 8 days ago | parent
The auth story for MCPs is a complete mess right now, though, which is why people build ones that run locally.
jonfw 8 days ago | parent
MCP is just packaging, and it's a good abstraction for building AI applications. It provides benefits similar to decoupling the front end and back end of a standard app: I can pick my favorite AI "front end", whether that's my IDE as a dev, a desktop app as a business user, or a server if I'm running an agentic workflow. MCP lets you package tools, prompts, etc. in a way that works across any of those front ends. Even if you don't plan on using an MCP across multiple tools that way, it still has the benefit of decoupling the lifecycle of tool development from the model/UI.
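The decoupling described above works because MCP is a wire protocol (JSON-RPC 2.0 over stdio or HTTP), so any front end can discover and invoke a server's tools the same way. A toy sketch of that request/response shape, not using the real SDK; the method names (`tools/list`, `tools/call`) follow the spec, but the `echo` tool is made up and error handling and capability negotiation are omitted:

```python
import json

# Toy MCP-style dispatch: because the contract is JSON-RPC messages,
# an IDE, desktop app, or agent runner can all drive the same server.

TOOLS = {
    "echo": {  # made-up example tool
        "description": "Echo back the input text.",
        "inputSchema": {
            "type": "object",
            "properties": {"text": {"type": "string"}},
            "required": ["text"],
        },
    },
}

def handle(request: dict) -> dict:
    """Dispatch one JSON-RPC request and build the response."""
    rid, method = request.get("id"), request["method"]
    if method == "tools/list":
        tools = [{"name": n, **spec} for n, spec in TOOLS.items()]
        result = {"tools": tools}
    elif method == "tools/call":
        args = request["params"]["arguments"]
        result = {"content": [{"type": "text", "text": args["text"]}]}
    else:
        return {"jsonrpc": "2.0", "id": rid,
                "error": {"code": -32601, "message": "method not found"}}
    return {"jsonrpc": "2.0", "id": rid, "result": result}

if __name__ == "__main__":
    req = {"jsonrpc": "2.0", "id": 1, "method": "tools/list"}
    print(json.dumps(handle(req)))
```

Since the wire format is the contract, the server (and the tools it packages) can be versioned and shipped independently of whichever front end calls it, which is the lifecycle decoupling jonfw describes.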