Cthulhu_ 4 hours ago
Anything that goes to production should follow a 4-6+ eyes rule: at least one reviewer who can assess the changes in isolation. If tools or LLMs help them with that, fine, but there should always be at least two humans involved, one making the changes and one verifying them, and if something like this happens, both are accountable. Not that they should be blamed for it per se, but the process and their way of working should be reviewed.
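As a purely illustrative sketch of enforcing that rule (assuming GitHub, which the comment doesn't specify; OWNER/REPO and the token are placeholders), branch protection can block merges to main until a human approves:

    # Sketch: require one approving human review before merging to main.
    # Assumes GitHub's branch protection REST API and a token with admin
    # rights on the (hypothetical) repo.
    import os
    import requests

    resp = requests.put(
        "https://api.github.com/repos/OWNER/REPO/branches/main/protection",
        headers={
            "Authorization": f"Bearer {os.environ['GITHUB_TOKEN']}",
            "Accept": "application/vnd.github+json",
        },
        json={
            "required_pull_request_reviews": {
                "required_approving_review_count": 1,  # a second pair of eyes
                "dismiss_stale_reviews": True,  # re-review after new pushes
            },
            "required_status_checks": None,
            "enforce_admins": True,  # no bypass for admins
            "restrictions": None,
        },
    )
    resp.raise_for_status()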
dawnerd 3 hours ago
I cringe whenever someone suggests just having an agent review because “it knows the code better”. An AI agent wouldn't catch a lot of things a human would flag. And before someone says you just need to prompt it better: that's a huge amount of work for large projects, and you're still essentially begging it to do what you want.
| |||||||||||||||||
doctorwho42 4 hours ago
The problem is that humans inherently fill in gaps in what they perceive of the world. Our brains are designed to fill in gaps; it's why memory is so blurry when it comes to reciting the facts of what we saw at a trial, and why you could swear you saw "x" in the production software you were about to push. It really comes down to expectations, and those expectations help reduce cognitive load / increase cognitive efficiency (resource usage). So as more and more people get used to using AI, you will see these mistakes occur more frequently, because that's how our brains work.