niek_pas 4 hours ago
What's even worse is that when dealing with human software teams, a vague requirement will (at least in a well-run org) receive demands for further specification: "What do you mean by 'get data'?", etc. An LLM will just say, "Sure! Here's the fully implemented code that gets the data and gives it to the user." and be done with it.
smokel 4 hours ago
ChatGPT 5.5 responds:

> What data should I retrieve, and where should I get it from? Please specify at least: ...

And then it goes on to ask for exactly what's necessary, being all constructive about it.
vidarh 4 hours ago
When the cycles are short enough, though, that is to some degree the right thing. For anything users can immediately see and react to, it gives them something tangible to give feedback on. It's the wrong thing for important qualities under the hood (like durability and security requirements) that aren't tangible to them.
resters 4 hours ago
Just as poorly designed code can still compile: this is operator error, not a failure of the technology.
pydry 4 hours ago
IME you give it very precise specifications and it still fucks it up. When we talk about specs being "the" bottleneck, it just isn't the case that specs are the only thing LLMs do poorly; they're really bad at a lot of stuff in the SDLC. They're also good at producing results that are bad but look OK if you either don't look too closely or don't know what you're looking for.