| ▲ | suyavuz 4 hours ago |
| People have become so lazy with AI. They don't even check what they commit.
|
| ▲ | Cthulhu_ 4 hours ago | parent | next [-] |
| Anything that goes to production should follow a 4-6+ eyes rule, with at least one reviewer who can review the changes in isolation. If tools or LLMs help them with that, fine, but there should always be at least two humans involved: one making the changes, one verifying. If something like this happens, they're both culpable. Not that they should be blamed for it per se, but the process and their way of working should be reviewed.
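A two-reviewer rule like this can be enforced mechanically rather than left to convention. As one possible sketch (assuming the repo is on GitHub; the exact values here are illustrative, not from the comment), the branch-protection API accepts a payload requiring two approving reviews before anything merges:

```json
{
  "required_status_checks": null,
  "enforce_admins": true,
  "required_pull_request_reviews": {
    "required_approving_review_count": 2,
    "dismiss_stale_reviews": true
  },
  "restrictions": null
}
```

Sent via `PUT /repos/OWNER/REPO/branches/main/protection`; GitLab approval rules and most other hosts offer equivalent settings.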
| ▲ | dawnerd 3 hours ago | parent | next [-] |
| I cringe whenever someone suggests just having an agent review because “it knows code better”. An AI agent wouldn’t catch a lot of things a human would flag. And before someone says you just need to prompt it better: that’s a huge amount of work for large projects, and you’re still essentially begging it to do what you want.
| ▲ | throwatdem12311 3 hours ago | parent [-] |
| I have not encountered anything more soul-crushing in my entire career than spending hours going over LLM-generated slop vomited out by a contractor in Pakistan who doesn’t give a shit, only to have the review itself fed back in as a re-prompt, get the same 2000-line ball of spaghetti back with even more issues, and go back and forth until I just give up and approve it. No, AI code review doesn’t help. Claude can’t even give me correct line numbers 80% of the time; it literally just makes them up, and more than half of it is false-positive BS anyway.
| ▲ | dawnerd 3 hours ago | parent [-] |
| Yep, I’ve had to approve bad code too due to timelines, and now our codebase has so much tech debt it doesn’t even matter anymore. Worse, as new people work on the code, the LLMs pick up the bad code and it’s been spiraling from there.
| ▲ | doctorwho42 4 hours ago | parent | prev [-] |
| The problem is that humans inherently fill in gaps in what they perceive of the world. Our brains are designed to fill in gaps; it's why memory is so blurry when it comes to reciting the facts of what we saw at a trial, and why you could swear you saw "x" in the production software you were about to push. It really comes down to expectations, and those expectations help reduce cognitive load and increase cognitive efficiency (resource usage). So as more and more people get used to using AI, you will see these mistakes occur more frequently, because that's how our brains work.
|
|
| ▲ | sharts 3 hours ago | parent | prev [-] |
| They don’t check because the expectation, coming from higher-ups, is now to commit and merge often.