outlore 4 hours ago

i am curious: why this instead of feeding your LLM an OpenAPI spec?

jasonjmcghee 3 hours ago

It's not about the interface to make a request to a server, it's about how the client and server can interact.

For example:

When and how should notifications be sent and how should they be handled?

---

It's a lot more like LSP.
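To make the notification point concrete, here is a minimal sketch of how JSON-RPC-based protocols like LSP (and MCP, which builds on JSON-RPC) distinguish requests from notifications: a notification simply omits the "id" field, so no response is expected. The method names are illustrative.

```python
import json

def request(method: str, params: dict, id: int) -> str:
    # A request carries an "id"; the receiver must answer with a
    # response bearing the same id.
    return json.dumps({"jsonrpc": "2.0", "id": id,
                       "method": method, "params": params})

def notification(method: str, params: dict) -> str:
    # A notification omits "id"; no response is expected (or allowed).
    # The protocol, not the LLM, defines when these get sent.
    return json.dumps({"jsonrpc": "2.0",
                       "method": method, "params": params})

# Illustrative method names:
req = request("resources/list", {}, 1)
note = notification("notifications/resources/updated",
                    {"uri": "file:///data.txt"})
```

The protocol spec then answers the "when and how" question: it says which side may send which notifications, and what the receiver is obliged to do with them.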

quantadev 3 hours ago

Nobody [who knows what they're doing] wants their LLM API layer controlling anything about how their clients and servers interact though.

jasonjmcghee 2 hours ago

Not sure I understand your point. If it's your client / server, you are controlling how they interact, by implementing what the protocol requires.

If you're writing an LSP server for a language, you're implementing what the protocol requires (when to show errors, inlay hints, code fixes, etc.) - it's not deciding on its own.
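For example, an LSP server decides *when* it has errors to report, but the *shape* of the report is fixed by the protocol: the server pushes them via the real `textDocument/publishDiagnostics` notification. A minimal sketch of constructing that message (the file URI and diagnostic content are made up):

```python
import json

def publish_diagnostics(uri: str, diagnostics: list) -> str:
    # The server chooses when to call this (e.g. after re-analyzing a
    # file); the method name and payload shape come from the LSP spec.
    return json.dumps({
        "jsonrpc": "2.0",
        "method": "textDocument/publishDiagnostics",
        "params": {"uri": uri, "diagnostics": diagnostics},
    })

msg = publish_diagnostics("file:///main.py", [{
    "range": {"start": {"line": 0, "character": 0},
              "end": {"line": 0, "character": 4}},
    "severity": 1,  # 1 = Error in the LSP severity scale
    "message": "undefined name 'fooo'",
}])
```

The client (editor) then renders those diagnostics however it likes; neither side is "deciding on its own" what the interaction looks like.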

quantadev 43 minutes ago

Even if I could make use of it, I wouldn't, because I don't write proprietary code that only works on one AI Service Provider. I use only LangChain so that all of my code can be used with any LLM.

My app has a simple drop down box where users can pick whatever LLM they want to use (OpenAI, Perplexity, Gemini, Anthropic, Grok, etc).

However if they've done something worthy of putting into LangChain, then I do hope LangChain steals the idea and incorporates it so that all LLM apps can use it.
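The provider-dropdown pattern described above can be sketched in a few lines. This is illustrative plain Python, not LangChain's actual API: every provider is hidden behind one shared interface, so the rest of the app never cares which backend the user picked.

```python
from typing import Protocol

class ChatModel(Protocol):
    """The one interface the rest of the app codes against."""
    def invoke(self, prompt: str) -> str: ...

# Stub backends standing in for real provider integrations.
class OpenAIChat:
    def invoke(self, prompt: str) -> str:
        return f"[openai] {prompt}"

class AnthropicChat:
    def invoke(self, prompt: str) -> str:
        return f"[anthropic] {prompt}"

# The "drop down box": user-visible name -> backend class.
PROVIDERS: dict[str, type] = {
    "OpenAI": OpenAIChat,
    "Anthropic": AnthropicChat,
}

def get_model(name: str) -> ChatModel:
    # Swapping providers is a dictionary lookup, not a code change.
    return PROVIDERS[name]()

reply = get_model("Anthropic").invoke("hello")
```

Any provider-specific feature that only one backend supports breaks this abstraction, which is the objection being made: protocol features tied to a single vendor can't sit behind the shared interface until every backend (or a library like LangChain) supports them.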

pizza 2 hours ago

I do

quantadev 2 hours ago

> "who knows what they're doing".

outlore 3 hours ago

makes sense, thanks for the explanation!

pizza 3 hours ago

I think OpenAI spec function calls are to this what raw bytes are to Unix file descriptors

quotemstr 3 hours ago

Same reason in Emacs we use lsp-mode and eglot these days instead of ad-hoc flymake and comint integrations. Plug and play.