dijksterhuis · 2 hours ago
in a word, maintainability.

> maintainability is inversely proportional to the amount of time it takes a developer to make a change and the risk that change will break something

https://softwareengineering.stackexchange.com/a/134863

i could be wrong, but i'm pretty sure end-users get upset when a change takes a long time or ends up breaking something for them. just because people are finding that agents or whatever speed changes up now doesn't necessarily mean they won't hit a slow-down later, when the codebase becomes an unmaintainable mess. technical debt is always a thing, even with machines doing the work (the agent/machine still has to parse the codebase to make changes).
raw_anon_1111 · 43 minutes ago · parent
What makes you think that AI couldn't make the same changes without breaking things, whether you modify the code or not? And you do have automated unit tests, don't you?

Right now I have a 5,000-line monolithic vibe-coded internal website that will be used by at most 3 people. It mixes Python, inline CSS, and JavaScript with the API. I haven't looked at a line of code. The IAM role for the Lambda runtime has limited permissions (meaning the code can't do anything those permissions won't allow). I used AWS Cognito for authorization, and I validated the security of the endpoints and the permissions of the database user.

Neither Claude nor Codex has any issue adding pages, features, and API endpoints without breaking changes.

By definition, coding agents are the worst they will ever be right now.