pagecalm 14 hours ago
Agreed on the economics side. Clean code saves you time and money whether a human or AI wrote it. That part doesn't change. But I don't think the models are going to get there on their own. AI will generate a working mess all day long if you let it. The pressure to write good code has to come from the developer actually reviewing what comes out and pushing back. The incentive is there, but it only matters if someone acts on it.
arnitdo 4 hours ago
> AI will generate a working mess all day long if you let it. The pressure to write good code has to come from the developer actually reviewing what comes out and pushing back

You are reinventing the wheel again with yet another form of reinforcement learning. I don't use any form of LLM assistance for coding, but if I had to continually tell it what to do, how to do it, what not to do, and what assumptions to make, I would rather stimulate my neurons more by doing the damn thing myself. The narrative of "Yeah, it will do everything, provided you tell it how to do everything!" seems baseless to me. Even if you can emulate the smartest human possible, can you emulate an idiot?
antdke 12 hours ago
Yup - in the end, it's still just a tool that adheres to the steering (or lack thereof) of the user.