crowcroft 8 days ago

I think we're probably overusing MCPs.

If you're a large org with an API that an ecosystem of partners uses, then you should host a remote MCP server, and people should connect LLMs to it.

The current model, where someone bundles tools into an MCP server that you then download and run locally, feels like the wrong path. Tool definitions for LLMs are already pretty standardized, so if everything is running locally, why am I not just importing a package of tools? I'm not sure what the MCP server is adding.
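To make the "package of tools" idea concrete, here's a minimal sketch of what importing tools directly might look like, with no server in between. The schema shape mirrors the common function-calling format most LLM APIs accept; the names and the stub tool are illustrative, not a real library:

```python
# Hypothetical sketch: tools as a plain importable package, no MCP server.

def get_weather(city: str) -> str:
    """Stub tool: return a canned weather string for a city."""
    return f"Sunny in {city}"

# Tool definitions in the standard-ish shape an LLM client could pass
# straight to the model as its tools/functions list.
TOOLS = [
    {
        "name": "get_weather",
        "description": "Get current weather for a city.",
        "input_schema": {
            "type": "object",
            "properties": {"city": {"type": "string"}},
            "required": ["city"],
        },
    }
]

# Dispatch table: the "package of tools" the comment imagines importing.
HANDLERS = {"get_weather": get_weather}

def call_tool(name: str, arguments: dict) -> str:
    """Route a model's tool call to the matching local function."""
    return HANDLERS[name](**arguments)

print(call_tool("get_weather", {"city": "Oslo"}))  # prints: Sunny in Oslo
```

The point of the sketch is that, locally, the schema list plus a dispatch table is essentially everything a tool call needs; a local MCP server wraps this same shape in an extra process and transport.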

empath75 8 days ago | parent | next [-]

The auth story for MCPs is a complete mess right now, though, which is why people build ones that run locally.

electric_muse 8 days ago | parent | next [-]

That's ironic. I think local MCPs are an auth nightmare.

Just think of all those plaintext auth tokens sitting in well-known locations on your machine.

It's a black hat dream.
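To illustrate the concern: MCP client configs are typically plain JSON files on disk (Claude Desktop's `claude_desktop_config.json` is one example), and secrets often end up inline as env vars. This sketch flags token-like keys in such a config; the sample config contents and the `SUSPECT` heuristic are illustrative assumptions, not a real scanner:

```python
import json

# Illustrative sample of the kind of config that sits in a well-known
# location on disk, with a credential inline in plaintext.
SAMPLE_CONFIG = """
{
  "mcpServers": {
    "github": {
      "command": "npx",
      "args": ["-y", "@modelcontextprotocol/server-github"],
      "env": {"GITHUB_PERSONAL_ACCESS_TOKEN": "ghp_exampleexample"}
    }
  }
}
"""

SUSPECT = ("token", "key", "secret", "password")

def find_plaintext_secrets(config_text: str) -> list[str]:
    """Return dotted paths of env vars whose names look credential-like."""
    config = json.loads(config_text)
    hits = []
    for server, spec in config.get("mcpServers", {}).items():
        for key in spec.get("env", {}):
            if any(s in key.lower() for s in SUSPECT):
                hits.append(f"{server}.env.{key}")
    return hits

print(find_plaintext_secrets(SAMPLE_CONFIG))
```

Anything this trivial to write is equally trivial for malware scanning a compromised machine, which is the "black hat dream" part.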

We'll see, but I think commercial use of local MCPs is going to be constrained to use cases that only make sense if the MCP is local (e.g. it requires local file access).

For everything else, the only commercially reasonable way to use them is going to be remote streamable-HTTP MCPs running in isolated containers.

And even then, you need some management and identity plane. So they're likely going to be accessed via an enterprise gateway/proxy to handle things like:

- composition: bundling multiple MCPs into one for easier connection
- identities per-user / per-agent
- generation of rotatable tokens for headless agents
- filtering which features (tools, prompts, resources) flow through into LLM context
- basic security features, like tool-description whitelisting to prevent rug pulls

MCP is only a protocol, after all. It's not meant to be a batteries-included product.

crowcroft 8 days ago | parent | prev | next [-]

This is why I think we should just be packaging tools into apps though.

Let ChatGPT/Claude/Cursor manage my OAuth tokens, and just bring tools into those platforms without a whole MCP server in the middle.

kiitos 7 days ago | parent | prev [-]

...no, MCP was always designed to be run locally; the auth mess was the result of people trying to sidestep that design intent and getting grumpy that it didn't work well (surprise: of course it didn't).

jonfw 8 days ago | parent | prev [-]

MCP is just packaging. It's the ideal abstraction for building AI applications.

I think it provides benefits similar to decoupling the front end and back end of a standard app.

I can pick my favorite AI "front end", whether that's my IDE as a dev, a desktop app as a business user, or a server if I'm running an agentic workflow.

MCP allows you to package tools, prompts, etc. in a way that works across any of those front ends.

Even if you don't plan on leveraging an MCP across multiple front ends in that way, I do think it has benefits in decoupling the lifecycle of tool development from the model/UI.

crowcroft 8 days ago | parent [-]

The biggest challenge I have is that setting up and configuring them is a mess. I'm pretty technical and I still find configuration confusing and brittle. Especially if auth is involved.
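For reference, a typical local MCP server entry in a client config file looks roughly like the fragment below. The overall `mcpServers` shape follows common MCP client conventions, but the server name, command, and paths here are illustrative placeholders, not exact values for any particular server:

```json
{
  "mcpServers": {
    "example-analytics": {
      "command": "pipx",
      "args": ["run", "example-analytics-mcp"],
      "env": {
        "GOOGLE_APPLICATION_CREDENTIALS": "/path/to/service-account.json"
      }
    }
  }
}
```

Even this small fragment shows the friction: the user has to know where the config file lives, what command launches the server, and how to obtain and safely store the credentials file it points at.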

I work in a marketing team, and I would love folks to be able to use Google's Analytics MCP [1]. But the idea of getting people into Google Cloud, or setting up and sharing a file with service-account credentials, is an absolute nightmare.

I'm not saying these problems can't be solved, and if remote MCPs gain adoption, that alone solves a lot of the issues, but the way most MCPs are packaged and shared currently leaves A LOT to be desired.

[1] https://github.com/googleanalytics/google-analytics-mcp