niek_pas 4 hours ago

What's even worse is that when dealing with human software teams, a vague requirement will (at least in a well-run org) receive demands for further specification. "What do you mean by 'get data'?", etc.

An LLM will just say, "Sure! Here's the fully implemented code that gets the data and gives it to the user." and be done with it.

smokel 4 hours ago | parent | next [-]

ChatGPT 5.5 responds:

> What data should I retrieve, and where should I get it from? Please specify at least: ...

And then it goes on to ask exactly what is necessary, being constructive about it.

airstrike 4 hours ago | parent [-]

You're both right. The parent's example was a toy one, and if it were posed literally to an LLM, the model would almost certainly ask for more information. Yes, accuracy matters, but I don't think that objection applies here.

But the point still stands: in most contexts, the LLM will fill in the blanks with whatever it deems appropriate, like an overconfident intern at best and a bull in a china shop at worst.

vidarh 4 hours ago | parent | prev | next [-]

When the cycles are short enough, though, that is to some degree the right thing. That is, it's the right thing for things the users can then immediately see and give feedback on, because it lets them give feedback on something tangible.

It's the wrong thing for important things under the hood (like durability and security requirements) that are not tangible to them.

resters 4 hours ago | parent | prev | next [-]

Just as poorly designed code can still compile. This is operator error, not a failure of the technology.

pydry 4 hours ago | parent | prev [-]

IME you give it very precise specifications and it still fucks it up.

When we talk about "the" bottleneck being specs, it just isn't the case that specs are the only thing LLMs do poorly. They're really bad at a lot of stuff in the SDLC.

They're also good at producing results that are bad but look OK if you either don't look too closely or don't know what you're looking for.