| ▲ | _heimdall 6 hours ago |
| I have worked with quite a few people committing code they didn't fully understand. I don't mean this as a drive-by bazinga either; the practice of copying code, or thinking you understand it when you don't, is nothing new. |
|
| ▲ | allajfjwbwkwja 6 hours ago | parent | next [-] |
| Pre-LLM, it was much easier for reviewers to discern that. Now, the AI-generated code can look like it was well thought out by somebody competent, when it wasn't. |
| |
| ▲ | jhide 5 hours ago | parent [-] | | Have you ever reviewed an AI-generated commit from someone with insufficient competence that was more compelling than their work would be if it were done unassisted? In my experience it’s exactly the opposite: AI generation aggravates existing blind spots. This is because, excluding malicious incompetence, devs will generally try to understand what they’re doing if they’re doing it without AI. | | |
| ▲ | bandrami 4 hours ago | parent | next [-] | | I think the issue is not that the patches are more compelling but that they're significantly larger and more frequent | |
| ▲ | allajfjwbwkwja 5 hours ago | parent | prev | next [-] | | I have. It's always more compelling in a web diff. These guys are the first coworkers for whom it became absolutely necessary for me to review their work by pulling down all their code and inspecting every line myself in the context of the full codebase. | |
| ▲ | abustamam 4 hours ago | parent | prev [-] | | I try to understand what the LLM is doing when it generates code. I understand that I'm still responsible for the code I commit even if it's LLM-generated, so I may as well own it. |
|
| ▲ | enneff 4 hours ago | parent | prev [-] |
| Yes and if they copy and paste code they don’t understand then they should disclose that in the commit message too! |