| ▲ | wilkystyle 14 hours ago |
| Can you elaborate? Fairly new to langchain, but didn't realize it had any sort of stereotypical type of user. |
|
| ▲ | int_19h 11 hours ago | parent | next [-] |
| I'll admit that I haven't looked at it in a while, but as originally released, it was a textbook example of how to complicate a fundamentally simple and well-understood task (text templates, basically) with lots of useless abstractions that made it all sound more "enterprise". People would write complicated langchains, but when you looked under the hood all they were doing was some string concatenation, and the result was actually less readable than a simple template with substitutions in it. |
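For concreteness, roughly what the "simple template with substitutions" alternative looks like; the prompt text and field names here are made up:

    # A plain prompt template with substitutions -- often all a "chain"
    # amounts to once you look under the hood.
    SUMMARY_PROMPT = (
        "You are a helpful assistant.\n"
        "Summarize the following {doc_type} in {num_sentences} sentences:\n\n"
        "{document}"
    )

    prompt = SUMMARY_PROMPT.format(
        doc_type="support ticket",
        num_sentences=3,
        document="Customer reports that the export button does nothing.",
    )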
| |
| ▲ | phyzome 5 hours ago | parent | next [-] | | Huh, kind of sounds like they used LLMs to design it. :-) | |
| ▲ | gcr 10 hours ago | parent | prev [-] | | What do you suggest instead? Handrolled code with “import openai”? BAML? | | |
| ▲ | hhh 2 hours ago | parent | next [-] | | Yes. In an industry with rapidly changing features and 7000 of these products that splinter and lose their user base so quickly, you should write your own orchestration for this stuff. It's not hard, and it gives you a very easy path to implementing new features or optimizations. | |
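A rough sketch of what "your own orchestration" can look like with nothing but the OpenAI SDK; the model name, prompts, and helper here are placeholders, not a prescription:

    from openai import OpenAI

    client = OpenAI()  # reads OPENAI_API_KEY from the environment

    def ask(system: str, user: str) -> str:
        """One model call; swapping providers means changing only this function."""
        resp = client.chat.completions.create(
            model="gpt-4o-mini",  # placeholder model name
            messages=[
                {"role": "system", "content": system},
                {"role": "user", "content": user},
            ],
        )
        return resp.choices[0].message.content

    # "Orchestration" is just ordinary control flow between specialized prompts.
    outline = ask("You write terse outlines.", "Outline a post about prompt templates.")
    draft = ask("You expand outlines into prose.", outline)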
| ▲ | llmslave2 6 hours ago | parent | prev [-] | | Oh gosh, not that legacy "hand rolled code" |
|
|
|
| ▲ | XCSme 13 hours ago | parent | prev | next [-] |
| I am not sure what the stereotype is, but I tried using langchain and realised most of the functionality actually requires more code than simply writing my own direct LLM API calls. Overall I felt like it solves a problem that doesn't exist, and I've been happily sending direct API calls to LLMs for years without issues. |
| |
| ▲ | teruakohatu 12 hours ago | parent | next [-] | | JSON Structured Output from OpenAI was released a year after the first LangChain release. I think structured output with schema validation mostly replaces the need for complex prompt frameworks. I do look at the LC source from time to time because they do have good prompts baked into the framework. | | |
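For anyone who hasn't used it, a minimal sketch of schema-validated output with the OpenAI Python SDK; the model name and schema below are illustrative:

    from openai import OpenAI
    from pydantic import BaseModel

    class Invoice(BaseModel):
        vendor: str
        total_usd: float
        line_items: list[str]

    client = OpenAI()
    completion = client.beta.chat.completions.parse(
        model="gpt-4o-2024-08-06",  # any model with structured-output support
        messages=[{"role": "user", "content": "Extract the invoice: ..."}],
        response_format=Invoice,  # output is constrained to this schema
    )
    invoice = completion.choices[0].message.parsed  # a validated Invoice instance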
| ▲ | avaer 4 hours ago | parent | next [-] | | To this day many good models don't support structured outputs (say Opus 4.5), so it's not a panacea you can count on in production. The bigger problem is that LangChain/Python is the least well set up to take advantage of strong schemas even when you do have them. Agree about pillaging for prompts, though. | |
| ▲ | teruakohatu 17 minutes ago | parent [-] | | > so it's not a panacea you can count on in production. OpenAI and Gemini models can handle ridiculously complicated and convoluted schemas. If I needed complicated JSON output I wouldn't use anything that didn't guarantee it. I have pushed Gemini 2.5 Pro further than I thought possible when it comes to ridiculously over-complicated (by necessity) structured output. |
| |
| ▲ | majormajor 12 hours ago | parent | prev [-] | | IME you could get reliable JSON or other easily-parsable output formats out of OpenAI's models going back at least to GPT-3.5 or 4 in early 2023. I think that was a bit after LangChain's release, but I don't recall hitting problems that required an extra layer in order to do "agent"-y things ("dispatch this to this specialized other prompt-plus-chatgpt-api-call, get back structured data, dispatch it to a different specialized prompt-plus-chatgpt-api-call") before it was a buzzword. | |
| ▲ | nostrebored 7 hours ago | parent [-] | | Can guarantee this was not true for any complicated extraction. You could reliably get it to output JSON, but not the JSON you wanted. Even on smallish ~50k datasets the error rate was still very high and interpretation of the schema was not particularly good. | |
| ▲ | avaer 4 hours ago | parent [-] | | It's still not true for any complicated extraction. I don't think I've ever shipped a successful solution to anything serious that relied on freeform schema say-and-pray with retries. |
|
|
| |
| ▲ | Insanity 11 hours ago | parent | prev [-] | | When my company organized an LLM hackathon last year, they pushed for LangChain... but then, instead of building on top of it, I ended up creating a more lightweight abstraction for our use cases. That was more fun than actually using it. |
|
|
| ▲ | prodigycorp 12 hours ago | parent | prev [-] |
| No dig at you, but I take the average langchain user to be someone who either a) is using it because their C-suite heard about it at some AI conference and foisted it upon them, or b) does not care about software quality in general. I've talked to many people who regret building on top of it, but they're in too deep. I think you may come to the same conclusions over time. |
| |
| ▲ | inlustra 12 hours ago | parent | next [-] | | Great insight that you wouldn’t get without HN, thank you! What would you and your peers recommend? | | |
| ▲ | baobabKoodaa 11 hours ago | parent | next [-] | | LangChain does not solve any actual problem, so there is no need to replace it with anything. Just build without it. | |
| ▲ | peab 10 hours ago | parent | prev | next [-] | | There's a great talk called "Pydantic is All You Need" that I highly recommend | |
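If memory serves, that talk is built around the instructor library; a rough sketch of the pattern it advocates, with a made-up schema and placeholder model name:

    import instructor
    from openai import OpenAI
    from pydantic import BaseModel

    class User(BaseModel):
        name: str
        age: int

    # instructor patches the client so responses come back as validated Pydantic models
    client = instructor.from_openai(OpenAI())

    user = client.chat.completions.create(
        model="gpt-4o-mini",  # placeholder model name
        response_model=User,
        messages=[{"role": "user", "content": "Jason is 25 years old."}],
    )
    print(user.name, user.age)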
| ▲ | sumitkumar 11 hours ago | parent | prev [-] | | Pydantic/PydanticAI in builder mode, or LlamaIndex in solution-architect mode. |
| |
| ▲ | wilkystyle 7 hours ago | parent | prev [-] | | Thanks for the reply, and no offense taken. I've inherited some code that uses LangChain, and this is my first experience with it. |
|