wmeredith 9 hours ago
> Agentic coding is the future, but people have not yet adapted. We went from punch cards to assembly to FORTRAN to C to JavaScript, each step adding more abstraction.

I don't completely disagree (I've argued the same point myself). But one critical difference between the LLM layer and all of the others you listed is that LLMs are non-deterministic, while every one of those other layers is deterministic. I'm not sure how that changes the dynamic, but surely it does.
CharlieDigital 8 hours ago
The LLM can be non-deterministic, but in the end, as long as we have compilers and integration tests, isn't it the same? Today you go from a non-deterministic human interpretation of requirements and specs to a compiled, deterministic state machine. Now a non-deterministic coding agent does the same thing, simply replacing the typing portion of that work. As long as you supply the agent with a well-curated set of guidance, it should ultimately produce more consistent, higher-quality code than if the same task were given to a team of random humans of varying skill and experience levels. The key now is how much a team invests in writing that high-quality guidance in the first place.
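To make that concrete, here's a toy sketch in Python of what that gate looks like. `agent_generate`, `tests_pass`, and `build` are all made-up names, and the "agent" just picks randomly to simulate LLM variability; the point is that the acceptance criterion is fixed even though the proposal step isn't:

```python
import random

def agent_generate(spec: str) -> str:
    """Stand-in for a coding agent: non-deterministic, sometimes wrong.
    (Here it picks randomly to simulate LLM variability.)"""
    candidates = [
        "def add(a, b): return a + b",  # correct implementation
        "def add(a, b): return a - b",  # plausible-looking bug
    ]
    return random.choice(candidates)

def tests_pass(source: str) -> bool:
    """Deterministic gate: the same source always gets the same verdict."""
    ns: dict = {}
    try:
        exec(source, ns)
        return ns["add"](2, 3) == 5 and ns["add"](-1, 1) == 0
    except Exception:
        return False

def build(spec: str, max_attempts: int = 5) -> str | None:
    for _ in range(max_attempts):
        code = agent_generate(spec)  # non-deterministic proposal
        if tests_pass(code):         # deterministic acceptance
            return code              # only verified code ships
    return None                      # surface failure rather than guess

print(build("add two integers"))
```

The generator can vary from run to run, but the verdict on any given output never does; that's the sense in which compilers and tests pin the whole pipeline down.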