babyshake 8 hours ago

Would you say the same for Mastra? If so, what would you say indicates a high quality candidate when they are discussing agent harnessing and orchestration?

avaer 5 hours ago | parent [-]

I somewhat take issue as a LangChain hater + Mastra lover with 20+ years of coding experience and coding awards to my name (which I don't care about, I only mention it for context).

LangChain is `left-pad` -- a big waste of your time, and Mastra is Next.js -- mostly saving you infrastructure boilerplate if you use it right.

But I think the primary difference is that Python is a very bad language for agent/LLM stuff (a static type system, streaming, isomorphic code, and a strong package management ecosystem are what you want, all of which Python is bad at). And if for some ungodly reason you had to do it in Python, you'd avoid LangChain anyway so you could bolt on strong shim layers to fix Python's shortcomings in a way that won't break when you upgrade packages.

Yes, I know there's LangChain.js. But at that point you might as well use something that isn't a port from Python.

> what would you say indicates a high quality candidate when they are discussing agent harnessing and orchestration?

Anything that shows they understand exactly how data flows through the system (because at some point you're gonna be debugging it). You can even do that with LangChain, but then all you'd be doing is complaining about LangChain.

runeblaze 19 minutes ago | parent | next [-]

> And if for some ungodly reason you had to do it in Python

I literally invoke sglang and vllm in Python. If you're not calling them over the network, you are supposed to use the two fastest inference engines there are via Python.
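
For what it's worth, the offline path is about as minimal as it gets -- here's a rough sketch of calling vllm's Python API directly (the model name is just an example):

```python
# Offline batch inference with vllm's Python API.
from vllm import LLM, SamplingParams

llm = LLM(model="Qwen/Qwen2.5-7B-Instruct")  # example model; swap in whatever you actually serve
params = SamplingParams(temperature=0.7, max_tokens=128)

outputs = llm.generate(["Explain KV caching in one paragraph."], params)
print(outputs[0].outputs[0].text)  # first completion of the first prompt
```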

stingraycharles 2 hours ago | parent | prev [-]

Python being a very bad language for LLM stuff is a hot take I haven’t heard before. Your arguments sound mostly like personal preferences that apply to any problem, not just agentic / LLM work.

If we’re going to throw experience around, after 30+ years of coding experience, I really don’t care too much anymore as long as it gets the job done and it doesn’t get in the way.

LangChain is OK; LangGraph et al. I try to avoid like the plague, as they’re too "framework"-ish and don’t compose well with other things.

avaer an hour ago | parent [-]

I used to write web apps in C++, so I totally understand not caring if it gets the job done.

I guess where I draw the line is that LLMs are inherently random I/O, so you have to treat them like UI or the network: you really have no idea what garbage is gonna come in, and you have to be defensive if you're going to build something complex -- otherwise you as a programmer will not be able to understand or trust it, and you will get hit by Murphy's law the moment you take off your blinders. (If it's simple, or a prototype nobody is counting on, obviously none of this matters.)

To me, insisting that stochastic inputs be handled in a framework that provides strong typing guarantees is not too different from insisting that your untrusted sandbox be written in a memory-safe language.

stingraycharles 13 minutes ago | parent [-]

What do static type systems provide you with that, say, structured input / output using pydantic doesn’t?
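
To make that concrete, this is roughly what I mean by structured output -- validate whatever the model hands you against a schema and treat failures as just another error path (hypothetical schema, no particular LLM client assumed):

```python
# Validating untrusted model output with pydantic (v2 API).
from pydantic import BaseModel, ValidationError

class ToolCall(BaseModel):  # hypothetical schema for an agent step
    tool: str
    arguments: dict[str, str]

raw = '{"tool": "search", "arguments": {"query": "mastra vs langchain"}}'  # stand-in for raw LLM output

try:
    call = ToolCall.model_validate_json(raw)  # parse + validate in one step
    print(call.tool, call.arguments)
except ValidationError as err:
    # Off-schema output is handled like any other untrusted input: reject, retry, or fall back.
    print("model returned something off-schema:", err)
```

You get runtime guarantees at the boundary either way; the question is what a compiler adds on top of that.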

I just don’t follow your logic of "LLMs are inherently random IO" (ok, I can somehow get behind that, but structured output is a thing) -> "you have to treat them like UI / network" (ok, yes, it’s untrusted) -> "static typing solves everything" (how exactly?).

This just seems like another "static typing is better than dynamic typing" debate, which really doesn’t have a lot to do with LLMs.