enigmoid 3 hours ago
> only a skilled developer who's thinking critically, and comfortable operating at the architectural level, can spot issues in the thousands of lines of generated code, before they become a problem. An additional factor: to find issues in generated code, the developer has to care. Many developers (especially at big firms) are already profoundly checked out from their work and are just looking for a way to close their tickets and pass the buck with the minimum possible effort. Those developers - even the capable ones - aren't going to put in the effort to understand their generated code well enough to find the issues the agents missed, especially amid the current AI-driven speed mania.
lgrapenthin 2 hours ago | parent
Indeed. Generated code is also harder to read because it violates the semantic expectations that come from the mental model of a human author. A generated piece of code is linguistically plausible, but it often imitates common idioms so incoherently that the actual bug ends up disguised in a way no sane human - even a bad programmer - could have come up with. Since LLMs have no internal evaluation, a reviewer has to account for this and evaluate line by line, rebuilding from scratch any hidden rationale and tacit knowledge the LLM never had in the first place - only to be misled into chasing non-concerns that drain costly hours. At that point, the investment is often greater than writing the code from scratch.
awakeasleep 3 hours ago | parent
There are exceptions to this, but in big firms many developers on many teams are actually punished for caring.