dougbright 8 days ago
My bad. I shouldn’t have mentioned LangChain here because it’s a little beside my point. What I mean is, MCP seems designed for a world where users talk to an LLM, and the LLM calls software tools. For the foreseeable future, especially in a business context, isn’t it more likely that users will still interact with structured software applications, and the applications will call the LLM? In that case, where does MCP fit into that flow?
anthonypasq 8 days ago
It separates FE and BE for agent teams, just like we did with web apps. The team building your agent framework might not know the business domain of every piece of your data/API space that your agent will need to interact with. In that case, it makes sense for your different backend teams to also own the MCP server that your company's agent team will utilize.
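A minimal sketch of that ownership split, in plain Python rather than the real MCP SDK (all names here, like InvoiceTools and get_invoice_status, are illustrative): the billing backend team owns the tool schemas and implementations, while the agent team only consumes a generic list-tools/call-tool interface.

```python
class InvoiceTools:
    """Owned by the billing backend team, which knows the domain."""

    def list_tools(self):
        # Tool schemas the agent framework can forward to the model verbatim.
        return [{
            "name": "get_invoice_status",
            "description": "Look up the status of an invoice by ID.",
            "inputSchema": {
                "type": "object",
                "properties": {"invoice_id": {"type": "string"}},
                "required": ["invoice_id"],
            },
        }]

    def call_tool(self, name, arguments):
        if name == "get_invoice_status":
            # A real implementation would call the billing service.
            return {"invoice_id": arguments["invoice_id"], "status": "paid"}
        raise ValueError(f"unknown tool: {name}")


# The agent team's framework never touches billing internals;
# it only sees the generic interface.
server = InvoiceTools()
tools = server.list_tools()
result = server.call_tool("get_invoice_status", {"invoice_id": "INV-42"})
```

The agent framework stays domain-agnostic; swapping in an HR or inventory team's server changes nothing on its side.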
| ||||||||||||||||||||||||||||||||
ivape 8 days ago
Yeah, I don’t know. Let’s say an org wants to do discovery of what functions are available for an app across the org. Okay, that’s interesting. But each team can also just import a big file called all_functions.txt. A Swagger API is already kind of like an MCP server, as is really any existing REST API (even better, because you don’t have to implement the interface). If I wanted to give my LLM brand new functionality, all I’d have to do is define tool use for <random_api>, with zero implementation. I could also just point it to a local file and say: here are the functions locally available. Remember, the big hairy secret is that all of these things just plop out a blob of text that you paste back into the LLM prompt (populating context history). That’s all these things do. Someone is going to have to unconfuse me.
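The "blob of text" point can be made concrete with a tiny sketch (everything here is illustrative; fetch_weather is a stub standing in for any <random_api>): a tool is just a schema shown to the model, plus whatever text the tool's output contributes back into the context history.

```python
import json

# The "tool definition" is just data the model reads.
tool_spec = {
    "name": "fetch_weather",
    "description": "Get current weather for a city.",
    "parameters": {
        "type": "object",
        "properties": {"city": {"type": "string"}},
        "required": ["city"],
    },
}

def fetch_weather(city):
    # Stub for the real API call.
    return {"city": city, "temp_c": 7, "conditions": "rain"}

# The whole "protocol": run the tool, then paste its output back into
# the prompt history as plain text for the next model turn.
context = ["user: what's the weather in NYC?"]
result = fetch_weather("NYC")
context.append("tool: " + json.dumps(result))
```

Whether the schema came from an MCP server, a Swagger spec, or all_functions.txt, the model only ever sees the resulting text in its context.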
| ||||||||||||||||||||||||||||||||
tomhallett 8 days ago
Total beginner question: if the “structured software application” gives the LLM the prompt “plan out what I need to do for my upcoming vacation to NYC”, will an LLM with a weather tool know “I need to ask for the weather so I can make a better packing list”? An LLM without a weather tool would either make the list without actual weather info, or your application would need to support the LLM asking “tell me what the weather is”, parse that, and then feed the answer back in a chained response. If so, it seems like tools are helpful in letting the LLM drive a bit more, right?
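That chained-response loop can be sketched in a few lines. This uses a scripted stand-in for the model (stub_model and get_weather are hypothetical), but the control flow is the real pattern: the application keeps re-prompting until the model stops requesting tools.

```python
def stub_model(messages, tools):
    # Pretend model: requests the weather tool if one is offered and
    # no weather info is in the context yet; otherwise it answers.
    history = " ".join(messages)
    if "get_weather" in tools and "weather_result" not in history:
        return {"tool_call": ("get_weather", {"city": "NYC"})}
    return {"answer": "packing list based on: " + history}

def get_weather(city):
    # Stub for a real weather API.
    return f"weather_result: {city} 5C, rain"

TOOLS = {"get_weather": get_weather}

def run(prompt, tools):
    messages = [prompt]
    while True:
        reply = stub_model(messages, tools)
        if "tool_call" in reply:
            name, args = reply["tool_call"]
            # The application executes the tool and feeds the text back.
            messages.append(TOOLS[name](**args))
        else:
            return reply["answer"], messages

answer, messages = run("plan my NYC vacation packing list", ["get_weather"])
```

With the tool offered, the final answer is grounded in the fetched weather; with an empty tool list, the same loop returns a generic list on the first turn, which is exactly the difference the question describes.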
|