▲ p1necone 6 hours ago | parent | next [-]

They might have tried, but this would be pretty hard to achieve in practice, especially for the older/worse models. For changes that do more than alter a couple of lines, LLM output can be very obvious. Stripping all comments from the changeset might go a long way toward making it more blind, but then you're missing context that you need to review the code properly.
|
▲ yorwba 7 hours ago | parent | prev [-]

The comment you're replying to is talking about a hypothetical scenario. In any case, the blinding didn't stop Reviewer #2 from calling out obvious AI slop (Figure 5).
▲ collabs 7 hours ago | parent [-]

I feel like I don't have the context for this conversation. If slop is obviously slop, I feel we should block it. Look at the comment: it states what the code following it does. It doesn't matter whether a human or a machine wrote it; it's useless. Actually, it's worse than useless, because anyone who needs to change the code now has to change two things. In that sense, you've doubled the work for everyone who touches the code after you, and for what benefit?
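To illustrate the kind of comment being described (a hypothetical snippet, not code from the discussion): a comment that merely restates the line below it creates a second copy of the logic that must be kept in sync.

```python
def apply_discount(price: float, rate: float) -> float:
    # Multiply price by (1 - rate) to apply the discount  <- restates the code;
    # if the logic changes, this comment must change too, or it misleads.
    return price * (1 - rate)

def apply_capped_discount(price: float, rate: float, cap: float = 50.0) -> float:
    """Apply a percentage discount, but never reduce the price by more than `cap`."""
    # Unlike the comment above, this docstring states intent the code alone
    # doesn't show (the cap rule), so it earns its maintenance cost.
    discount = min(price * rate, cap)
    return price - discount
```

The names and numbers here are invented for illustration; the point is only the contrast between a comment that duplicates the code and one that adds intent.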
▲ zozbot234 6 hours ago | parent [-]

The point is that AI models do these kinds of things all the time. They're not really all that smart or intelligent; they just replicate patterns and boilerplate, then iterate until it sort of appears to work properly.
▲ spartanatreyu 6 hours ago | parent [-]

> appears to work

That "appears" is doing a lot of heavy lifting. The code working isn't what's being selected for. The code looking convincing IS what is being selected for. That distinction is massive.