mehagar 13 hours ago:
I would normally agree, but I think the "code is a liability" quote assumes that humans are reading and modifying the code. If AI tools are also reading and modifying their own code, is that still true?
qudat 18 minutes ago:
What happens when there's a service outage and you cannot debug the code without an agent?
OptionOfT 13 hours ago:
You have to be able to express the change you want in natural language, which isn't always possible due to ambiguity. Beyond that, you eventually run into the same problem we humans run into: you run out of context window.

But as software engineers we've learned to abstract components away to reduce cognitive load when writing code. E.g., when you write a file, you don't deal with syscalls anymore. AI is different: it doesn't abstract things away, so a change you request might lead the AI to make a LOT of changes following the same pattern, and that can alter behavior in ways you haven't anticipated, haven't tested, or haven't even seen yet. And because it's so much code to review, it doesn't get the same scrutiny.
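To illustrate the abstraction point above, here's a minimal sketch (mine, not the commenter's) of the same file write done twice in Python: once through the high-level API, and once one layer closer to the syscalls it wraps, with explicit file descriptors, flags, and cleanup.

```python
import os
import tempfile

path = os.path.join(tempfile.mkdtemp(), "demo.txt")

# High-level: buffering, encoding, and fd management are all hidden.
with open(path, "w") as f:
    f.write("hello")

# Lower-level: roughly what the abstraction wraps (still not raw
# syscalls, but closer): explicit fd, open flags, bytes, and close.
fd = os.open(path, os.O_WRONLY | os.O_TRUNC)
try:
    os.write(fd, b"hello")
finally:
    os.close(fd)

with open(path) as f:
    print(f.read())  # hello
```

The point is that a human reaches for the top layer and stops thinking about the bottom one, whereas an agent rewriting code at the lower layer can touch every call site without that layering to contain the blast radius.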