ivape 8 days ago

What about LangChain makes more sense? It’s one of the most prematurely complex libs I’ve seen. I’m calling it right now: LangChain is going to run a mind fuck on everyone and convince people that this is actually how complicated orchestrating LLM control flow should be. The community needs to fight this framework off.

That’s beside the point. MCP servers let you discover function interfaces that you’ll have to implement yourself (in which case, yeah, what’s the point of this? I want the whole function body).

fennecfoxy 8 days ago | parent | next [-]

Yup exactly. It's all just state machines. Really nothing more than that.

It's like all these lang* frameworks are pretending that they can solve core deficiencies in the model, whereas most stuff is just workarounds.

We do have to glue model stuff together _somehow_ but there's no reason that it needs to be as complex as most of these frameworks are setting out to be.
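For illustration, roughly how small that glue can be: a bare tool-calling loop written as a little state machine, with no framework. call_llm and the message shapes here are stand-ins for whatever chat-completion client you actually use, not any particular SDK.

    # A bare tool-calling loop, no framework -- just a small state machine.
    # call_llm() is a placeholder for your chat-completion client; it's assumed
    # to return either a final text answer or a single tool request.
    import json

    def run_agent(user_msg, tools, call_llm, max_steps=10):
        messages = [{"role": "user", "content": user_msg}]
        for _ in range(max_steps):                       # state: thinking
            reply = call_llm(messages, tools)
            if reply.get("tool_call") is None:           # state: done
                return reply["content"]
            name = reply["tool_call"]["name"]            # state: acting
            args = reply["tool_call"]["arguments"]
            result = tools[name](**args)                 # tools: dict of callables
            messages.append({"role": "assistant", "content": None,
                             "tool_call": reply["tool_call"]})
            messages.append({"role": "tool", "name": name,
                             "content": json.dumps(result)})
        return "gave up after max_steps"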

diggan 8 days ago | parent | prev | next [-]

> The community needs to fight this framework off.

Why? The people who have been around for a while already avoid it: they've either tried it before, or poked around in the source and then quickly ran away. If people start using stuff without even the slightest amount of thinking beforehand, then that's their prerogative; why would it be up to the community hive-mind to "choose" what tools others should use?

lyu07282 8 days ago | parent [-]

Agreed, except we end up with a lot of junior people in the space who learned and used only LangChain, and when we hire them we have to get them to unlearn all the LangChain nonsense. Or we grep -v langchain cvs/

dougbright 8 days ago | parent | prev [-]

My bad. I shouldn’t have mentioned LangChain here because it’s a little beside my point. What I mean is, MCP seems designed for a world where users talk to an LLM, and the LLM calls software tools.

For the foreseeable future, especially in a business context, isn’t it more likely that users will still interact with structured software applications, and the applications will call the LLM? In that case, where does MCP fit into that flow?

anthonypasq 8 days ago | parent | next [-]

It separates FE and BE for agent teams, just like we did with web apps. The team building your agent framework might not know the business domain of every piece of your data/API space that your agent will need to interact with. In that case, it makes sense for your different backend teams to also own the MCP server that your company's agent team will utilize.

ivape 8 days ago | parent | prev | next [-]

Yeah, I don’t know. Let’s say an org wants to do discovery of what functions are available for an app across the org. Okay, that’s interesting. But each team could also just import a big file called all_functions.txt.

A Swagger API is already kind of like an MCP server, as is really any existing REST API (even better, because you don’t have to implement the interface). If I wanted to give my LLM brand-new functionality, all I’d have to do is define tool use for <random_api>, with zero implementation. I could also just point it at a local file and say: here are the functions available locally.

Remember, the big hairy secret is that all of these things just plop out a blob of text that you paste back into the LLM prompt (populating context history). That’s all these things do.
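For illustration, roughly what that looks like in practice. The endpoint URL, tool schema, and message shape below are placeholders, not any particular SDK: the "implementation" is just a call to an API that already exists, and the result is just text appended to the conversation history.

    # Illustration only: an existing REST endpoint exposed as a "tool", with
    # the response pasted back into context. Endpoint and schema are made up.
    import json, requests

    messages = []  # running conversation history (prompt + prior turns)

    weather_tool = {
        "name": "get_weather",
        "description": "Current weather for a city",
        "parameters": {
            "type": "object",
            "properties": {"city": {"type": "string"}},
            "required": ["city"],
        },
    }

    def get_weather(city):
        # The whole "function body" is a call to an API that already exists.
        return requests.get("https://example-weather.test/v1/current",
                            params={"city": city}, timeout=10).json()

    # When the model asks for get_weather("NYC"), the result is just a blob of
    # text appended to the context history -- nothing more exotic than that.
    result = get_weather("NYC")
    messages.append({"role": "tool", "name": "get_weather",
                     "content": json.dumps(result)})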

Someone is going to have to unconfuse me.

anthonypasq 8 days ago | parent | prev [-]

It separates FE and BE for agent teams, just like we did with web apps. The team building your agent framework might not know the business domain of every piece of your data/API space that your agent will need to interact with. In that case, it makes sense for your different backend teams to also own the MCP server that your company's agent team will utilize.

ivape 8 days ago | parent [-]

Why don’t they just own a REST or RPC server? This is the part of the MCP motivation I’m not totally getting. In fact, you can prove to yourself that your LLM can hook into almost any existing REST API in a few minutes, which gives it more existing options and functionality than just about anything else as it stands now.

Things like Swagger or GraphQL already provide discovery.
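As a rough sketch of that discovery point: an OpenAPI spec can be turned into tool definitions the model can see. The spec URL is a placeholder, and the output is a generic function-calling-style schema rather than any specific vendor's format.

    # Sketch: derive tool definitions from a Swagger/OpenAPI spec.
    import requests

    spec = requests.get("https://api.example.test/openapi.json", timeout=10).json()

    tools = []
    for path, ops in spec.get("paths", {}).items():
        for method, op in ops.items():
            if method not in ("get", "post", "put", "patch", "delete"):
                continue
            tools.append({
                "name": op.get("operationId") or f"{method} {path}",
                "description": op.get("summary") or op.get("description", ""),
                "parameters": {
                    "type": "object",
                    "properties": {
                        p["name"]: {
                            "type": p.get("schema", {}).get("type", "string"),
                            "description": p.get("description", ""),
                        }
                        for p in op.get("parameters", [])
                    },
                },
            })
    # `tools` can now be handed to the model as its available-function list.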

dragonwriter 8 days ago | parent [-]

> This is the part of the MCP motivation I’m not totally getting

Would it help you to know that the original use case of MCP was letting an LLM frontend discover and talk to servers it runs locally, communicating over stdio, and that this remains an important use case?
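To make that concrete, a heavily simplified sketch of the local/stdio case: the client spawns the server as a subprocess and exchanges newline-delimited JSON-RPC messages with it. The server binary, the tool name, and the arguments are placeholders, and the initialize handshake and error handling are omitted.

    # Rough sketch of an MCP-style stdio exchange (handshake omitted).
    import json, subprocess

    server = subprocess.Popen(
        ["my-mcp-server"],                      # hypothetical local server binary
        stdin=subprocess.PIPE, stdout=subprocess.PIPE, text=True,
    )

    def rpc(method, params=None, msg_id=1):
        msg = {"jsonrpc": "2.0", "id": msg_id, "method": method,
               "params": params or {}}
        server.stdin.write(json.dumps(msg) + "\n")
        server.stdin.flush()
        return json.loads(server.stdout.readline())

    # Ask the local server what tools it offers, then call one of them.
    tool_list = rpc("tools/list")
    result = rpc("tools/call",
                 {"name": "read_file", "arguments": {"path": "notes.txt"}},
                 msg_id=2)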

tomhallett 8 days ago | parent | prev [-]

Total beginner question: if the “structured software application” gives the LLM the prompt “plan out what I need to do for my upcoming vacation to NYC”, will an LLM with a weather tool know “I need to ask for the weather so I can make a better packing list”? An LLM without a weather tool would either make the list without actual weather info, or your application would need to support the LLM asking “tell me what the weather is”, parse that, and feed the answer back in a chained response. If so, it seems like tools are helpful in letting the LLM drive a bit more, right?

Eisenstein 8 days ago | parent [-]

If you have a weather tool available it will be in a list of available tools, and the LLM may or may not ask to use it; it is not certain that it will, but if it is a 'reasoning' model it probably will.

You need to be careful about creating a ton of tools and showing the whole list to the model, since it can overwhelm the model and send it down rabbit holes of using a bunch of tools to do things that aren't particularly helpful.

Hopefully you would have specific prompts and tools that handle certain types of tasks instead of winging it and hoping for the best.
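For illustration, one way to do that scoping: pick a small tool set per task type instead of exposing every tool on every call. The task names, tool names, and routing here are made up, just to show the shape of the idea.

    # Sketch of "specific prompts and tools per task type": only a few
    # relevant tools are shown to the model for a given task.
    TOOLSETS = {
        "trip_planning": ["get_weather", "search_flights", "get_calendar"],
        "reporting":     ["query_sales_db", "render_chart"],
    }

    def tools_for(task_type, all_tools):
        # all_tools: dict of tool name -> tool schema. Exposing only the
        # relevant few keeps the model from wandering down rabbit holes.
        names = TOOLSETS.get(task_type, [])
        return [all_tools[n] for n in names if n in all_tools]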