jascha_eng 3 hours ago

Hmm, I like the idea of providing a unified interface for all LLMs to interact with outside data. But I don't really understand why this is local-only. It would be a lot more interesting if I could connect this to my GitHub in the web app so that Claude automatically has access to my code repositories.

I guess I can do this for my local file system now?

I also wonder: if I build an LLM-powered app and currently just do RAG and inject the retrieved data into my prompts, should this replace that? Can I even integrate this in a useful way?

The use case of running on your machine with your specific data seems very narrow to me right now, considering how many different context sources and use cases there are.

jspahrsummers 3 hours ago | parent | next

We're definitely interested in extending MCP to cover remote connections as well. Both SDKs already support an SSE transport with that in mind: https://modelcontextprotocol.io/docs/concepts/transports#ser...
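
For a concrete picture, connecting a client over that SSE transport with the TypeScript SDK looks roughly like this. This is a minimal sketch based on the SDK docs; the endpoint URL is a placeholder, and exact method names may vary by SDK version:

    import { Client } from "@modelcontextprotocol/sdk/client/index.js";
    import { SSEClientTransport } from "@modelcontextprotocol/sdk/client/sse.js";
    import { ListResourcesResultSchema } from "@modelcontextprotocol/sdk/types.js";

    // Placeholder endpoint; a remote MCP server would expose an SSE route like this.
    const transport = new SSEClientTransport(new URL("http://localhost:3001/sse"));

    const client = new Client(
      { name: "example-client", version: "1.0.0" },
      { capabilities: {} }
    );

    await client.connect(transport);

    // Ask the server what context it can provide.
    const resources = await client.request(
      { method: "resources/list" },
      ListResourcesResultSchema
    );
    console.log(resources);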

However, it's not quite a complete story yet. Remote connections introduce a lot more questions and complexity—related to deployment, auth, security, etc. We'll be working through these in the coming weeks, and would love any and all input!

jascha_eng 3 hours ago | parent

Will you also publish some info on how other LLM providers can integrate with this? So far it looks like it's mostly a protocol for integrating with Anthropic's models/desktop client. That's not what I thought of when I read "open source".

It would be a lot more interesting to write a server for this if it allowed any model to interact with my data. Everyone would benefit from more integrations, and you (Anthropic) would still have the advantage of basically controlling the protocol.

somnium_sn 2 hours ago | parent

Note that both Sourcegraph's Cody and the Zed editor support MCP now. Both offer models besides Claude in their respective applications.

The initial Model Context Protocol release aims to solve the N-to-M relation between LLM applications (MCP clients) and context providers (MCP servers). The application is free to choose any model it wants. We carefully designed the protocol so that it is model-independent.
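
To make the model-independence concrete, here is a minimal sketch of an MCP server using the TypeScript SDK. The server name and resource URI are placeholders; note that nothing in it references any particular LLM:

    import { Server } from "@modelcontextprotocol/sdk/server/index.js";
    import { StdioServerTransport } from "@modelcontextprotocol/sdk/server/stdio.js";
    import {
      ListResourcesRequestSchema,
      ReadResourceRequestSchema,
    } from "@modelcontextprotocol/sdk/types.js";

    const server = new Server(
      { name: "example-server", version: "1.0.0" },
      { capabilities: { resources: {} } }
    );

    // Advertise one piece of context. Nothing here is tied to any model.
    server.setRequestHandler(ListResourcesRequestSchema, async () => ({
      resources: [
        { uri: "example://greeting", name: "Greeting", mimeType: "text/plain" },
      ],
    }));

    // Serve the content when any MCP client asks for it.
    server.setRequestHandler(ReadResourceRequestSchema, async (request) => ({
      contents: [
        { uri: request.params.uri, mimeType: "text/plain", text: "Hello from MCP" },
      ],
    }));

    await server.connect(new StdioServerTransport());

Any MCP client, whether it drives Claude, a GPT, or a local model, can list and read that resource over the same protocol.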

jascha_eng 2 hours ago | parent

"LLM applications" just means chat applications here, though, right? This doesn't seem to cover more integrated software, like a typical documentation RAG chatbot.
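
The only integration I can picture would be embedding an MCP client in the app itself and treating the server as the retrieval layer. A hypothetical sketch, where "my-docs-mcp-server" and the prompt assembly stand in for the app's own pieces:

    import { Client } from "@modelcontextprotocol/sdk/client/index.js";
    import { StdioClientTransport } from "@modelcontextprotocol/sdk/client/stdio.js";
    import { ReadResourceResultSchema } from "@modelcontextprotocol/sdk/types.js";

    const client = new Client(
      { name: "docs-rag-app", version: "1.0.0" },
      { capabilities: {} }
    );

    // "my-docs-mcp-server" is a stand-in for whatever server exposes the docs.
    await client.connect(new StdioClientTransport({ command: "my-docs-mcp-server" }));

    // Fetch context through MCP instead of a bespoke retrieval layer...
    const result = await client.request(
      { method: "resources/read", params: { uri: "docs://getting-started" } },
      ReadResourceResultSchema
    );

    // ...then inject it into the prompt exactly as a RAG pipeline would,
    // and send that prompt to whatever model/provider the app already uses.
    const context = result.contents
      .map((c) => ("text" in c ? c.text : ""))
      .join("\n");
    const prompt = `Answer using this documentation:\n\n${context}\n\nQuestion: ...`;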

mike_hearn an hour ago | parent | prev | next

Local-only solves a lot of problems. Our infrastructure does tend to assume that data and credentials are on a local computer - OAuth is horribly complex to set up, and there's no real benefit to messing with that when local works fine.

bryant 3 hours ago | parent | prev | next

> It would be a lot more interesting if I could connect this to my GitHub in the web app so that Claude automatically has access to my code repositories.

From the link:

> To help developers start exploring, we’re sharing pre-built MCP servers for popular enterprise systems like Google Drive, Slack, GitHub, Git, Postgres, and Puppeteer.
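
Per the quickstart docs, wiring one of those pre-built servers into the desktop client is a short entry in claude_desktop_config.json. A sketch; the token value is a placeholder, and exact keys may have changed since launch:

    {
      "mcpServers": {
        "github": {
          "command": "npx",
          "args": ["-y", "@modelcontextprotocol/server-github"],
          "env": { "GITHUB_PERSONAL_ACCESS_TOKEN": "<your-token>" }
        }
      }
    }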

jascha_eng 3 hours ago | parent

Yes, but you need to run those servers locally on your own machine and use the desktop client. That just seems... weird?

I guess the reason for this local focus is that it's otherwise hard to provide access to local files, which is a decently large use case.

Still, it feels a bit complicated to me.

singularity2001 3 hours ago | parent | prev

For me it's complementary to OpenAI's custom GPTs, which are non-local.