jasonjmcghee | 4 hours ago
Not sure I understand your point. If it's your client/server, you control how they interact by implementing what the protocol requires. If you're writing an LSP server for a language, you implement what the protocol requires (when to show errors, inlay hints, code fixes, etc.) - it's not deciding anything on its own.
quantadev | 3 hours ago | parent
Even if I could make use of it, I wouldn't, because I don't write proprietary code that only works with one AI service provider. I use only LangChain so that all of my code can be used with any LLM. My app has a simple drop-down where users can pick whichever LLM they want to use (OpenAI, Perplexity, Gemini, Anthropic, Grok, etc.). However, if they've done something worthy of putting into LangChain, then I do hope LangChain steals the idea and incorporates it so that all LLM apps can use it.
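The pattern described above (one code path, with the provider chosen at runtime from a drop-down) can be sketched roughly like this. This is a hypothetical illustration, not quantadev's actual app: the stub class stands in for whatever LangChain chat model each provider would supply (e.g. ChatOpenAI, ChatAnthropic), and the names `PROVIDERS` and `chat` are made up for the example.

```python
from dataclasses import dataclass

# Stub standing in for a LangChain chat model; in a real app each
# factory below would construct the provider's chat model instead.
@dataclass
class StubChatModel:
    provider: str

    def invoke(self, prompt: str) -> str:
        return f"[{self.provider}] reply to: {prompt}"

# Hypothetical registry keyed by the drop-down's selected value.
PROVIDERS = {
    "OpenAI": lambda: StubChatModel("OpenAI"),
    "Anthropic": lambda: StubChatModel("Anthropic"),
    "Gemini": lambda: StubChatModel("Gemini"),
    "Grok": lambda: StubChatModel("Grok"),
}

def chat(provider_name: str, prompt: str) -> str:
    # After this lookup, the rest of the app never branches on
    # which provider was picked - that's the portability argument.
    model = PROVIDERS[provider_name]()
    return model.invoke(prompt)
```

The point of the indirection is that swapping providers is a one-line change in the registry, not a rewrite of the application code.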