XCSme 13 hours ago

I'm not sure what the stereotype is, but I tried using LangChain and realised most of the functionality actually adds more code than simply writing my own direct LLM API calls.

Overall I felt like it solves a problem that doesn't exist; I've been happily sending direct API calls to LLMs for years without issues.
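For context, "direct API calls" here means using the provider SDK with no framework layer in between. A minimal sketch with the OpenAI Python client (the model name and prompt are placeholder assumptions):

    # A direct call with the OpenAI Python SDK -- no framework in between.
    from openai import OpenAI

    client = OpenAI()  # picks up OPENAI_API_KEY from the environment

    def summarize(text: str) -> str:
        # One prompt, one call, one string back.
        response = client.chat.completions.create(
            model="gpt-4o",  # placeholder model name
            messages=[
                {"role": "system", "content": "Summarize the user's text in one sentence."},
                {"role": "user", "content": text},
            ],
        )
        return response.choices[0].message.content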

teruakohatu 12 hours ago | parent | next [-]

JSON Structured Output from OpenAI was released a year after the first LangChain release.

I think structured output with schema validation mostly replaces the need for complex prompt frameworks. I do look at the LangChain source from time to time, though, because they do have good prompts baked into the framework.
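To make the comparison concrete, this is roughly what "structured output with schema validation" looks like: you define the schema once and the API constrains the model to it. A sketch using the OpenAI SDK's parse helper with a Pydantic model (the schema itself is made up):

    # Structured output: the schema, not the prompt, enforces the output shape.
    from pydantic import BaseModel
    from openai import OpenAI

    class Invoice(BaseModel):
        vendor: str
        total_cents: int
        line_items: list[str]

    client = OpenAI()
    completion = client.beta.chat.completions.parse(
        model="gpt-4o",  # placeholder model name
        messages=[{"role": "user", "content": "Extract the invoice from: ..."}],
        response_format=Invoice,  # the SDK turns the Pydantic model into a JSON schema
    )
    invoice = completion.choices[0].message.parsed  # already an Invoice instance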

avaer 4 hours ago | parent | next [-]

To this day many good models don't support structured outputs (say, Opus 4.5), so it's not a panacea you can count on in production.
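When a model has no native structured-output mode, the usual fallback is to ask for JSON, validate it yourself, and retry on failure. A hedged sketch of that pattern, with call_model standing in for whichever provider call you actually use:

    # Validate-and-retry fallback for models without native structured outputs.
    import json
    from pydantic import BaseModel, ValidationError

    class Invoice(BaseModel):
        vendor: str
        total_cents: int

    def call_model(prompt: str) -> str:
        # Hypothetical: wrap your provider's completion call here.
        raise NotImplementedError

    def extract_invoice(text: str, retries: int = 3) -> Invoice:
        prompt = (
            "Return ONLY a JSON object matching this schema:\n"
            f"{json.dumps(Invoice.model_json_schema())}\n\n{text}"
        )
        for _ in range(retries):
            raw = call_model(prompt)
            try:
                return Invoice.model_validate_json(raw)
            except ValidationError:
                continue  # retry and hope -- the "say-and-pray" loop
        raise ValueError("model never produced valid JSON")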

The bigger problem is that LangChain/Python is the least set up to take advantage of strong schemas even when you do have them.

Agree about pillaging for prompts though.

teruakohatu 15 minutes ago | parent [-]

> so it's not a panacea you can count on in production.

OpenAI and Gemini models can handle ridiculously complicated and convoluted schemas. If I needed complicated JSON output, I wouldn’t use anything that didn’t guarantee it.

I have pushed Gemini 2.5 Pro further than I thought possible when it comes to ridiculously overcomplicated (by necessity) structured output.
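For reference, schema-constrained output from Gemini can be requested through the google-genai SDK roughly like this; the nested schema is invented for illustration, and the exact config field names should be checked against the current SDK docs:

    # Schema-constrained output via the google-genai SDK (illustrative sketch).
    from pydantic import BaseModel
    from google import genai

    class LineItem(BaseModel):
        description: str
        quantity: int
        unit_price_cents: int

    class PurchaseOrder(BaseModel):
        vendor: str
        items: list[LineItem]
        approved: bool

    client = genai.Client()  # API key taken from the environment
    response = client.models.generate_content(
        model="gemini-2.5-pro",
        contents="Extract the purchase order from: ...",
        config={
            "response_mime_type": "application/json",
            "response_schema": PurchaseOrder,
        },
    )
    order = response.parsed  # parsed PurchaseOrder, or fall back to response.text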

majormajor 12 hours ago | parent | prev [-]

IME you could get reliable JSON or other easily-parsable output formats out of OpenAI's models going back at least to GPT-3.5 or 4 in early 2023. I think that was a bit after LangChain's release, but I don't recall hitting problems that needed an extra layer in order to do "agent"-y things ("dispatch this to this specialized other prompt-plus-chatgpt-api-call, get back structured data, dispatch it to a different specialized prompt-plus-chatgpt-api-call") before it was a buzzword.
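That dispatch pattern needs nothing more than a thin helper and some routing. A sketch of the pre-framework version, with the prompts, model name, and ticket example all invented for illustration:

    # Pre-framework "agent"-y dispatch: parse one call's output, route to the next.
    import json
    from openai import OpenAI

    client = OpenAI()

    def complete(system: str, user: str) -> str:
        # Thin helper around a single chat-completion call.
        resp = client.chat.completions.create(
            model="gpt-4o",  # placeholder for the GPT-3.5/4-era models discussed above
            messages=[{"role": "system", "content": system},
                      {"role": "user", "content": user}],
        )
        return resp.choices[0].message.content

    CLASSIFY = 'Classify the ticket. Reply with JSON only: {"kind": "bug" | "billing"}'
    HANDLERS = {
        "bug": "Summarize the bug report for the engineering queue.",
        "billing": "Draft a polite reply about the billing issue.",
    }

    def handle_ticket(ticket: str) -> str:
        kind = json.loads(complete(CLASSIFY, ticket))["kind"]
        return complete(HANDLERS[kind], ticket)  # dispatch to the specialized prompt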

nostrebored 7 hours ago | parent [-]

I can guarantee this was not true for any complicated extraction. You could reliably get it to output JSON, but not the JSON you wanted.

Even on smallish ~50k datasets the error rate was still very high, and its interpretation of the schema was not particularly good.

avaer 4 hours ago | parent [-]

It's still not true for any complicated extraction. I don't think I've ever shipped a successful solution to anything serious that relied on freeform schema say-and-pray with retries.

Insanity 11 hours ago | parent | prev [-]

When my company organized an LLM hackathon last year, they pushed for LangChain, but instead of building on top of it I ended up creating a more lightweight abstraction for our use cases.
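In practice such a lightweight abstraction is often just a provider-agnostic protocol plus one thin backend; a hypothetical sketch (not the actual hackathon code):

    # A thin, provider-agnostic wrapper instead of a full framework (hypothetical).
    from dataclasses import dataclass
    from typing import Protocol

    class ChatBackend(Protocol):
        def complete(self, system: str, user: str) -> str: ...

    @dataclass
    class OpenAIBackend:
        model: str = "gpt-4o"  # placeholder model name

        def complete(self, system: str, user: str) -> str:
            from openai import OpenAI
            resp = OpenAI().chat.completions.create(
                model=self.model,
                messages=[{"role": "system", "content": system},
                          {"role": "user", "content": user}],
            )
            return resp.choices[0].message.content

    def run_prompt(backend: ChatBackend, system: str, user: str) -> str:
        # Retries, logging, and eval hooks for the hackathon would hang off here.
        return backend.complete(system, user)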

That was more fun than actually using it.