postflopclarity 4 days ago

I do this all the time. I pass my code in with a prompt like "you are a skeptic and hate all the code my student produces: here is their latest PR", etc.
|
osn9363739 4 days ago | parent

I have devs that do this, and we have AI code review in CI. The problem is, it always finds something. The devs who have been in the code base for a while know what to ignore; the new devs get bogged down in research. It's a net benefit since it forces them to learn, which they should be doing anyway. But it definitely slows them down, which cuts against some of the productivity-boost claims I see. A human reviewer with codebase experience is still needed.
mywittyname 4 days ago | parent | next

Slowing down new developers by forcing them to understand the product and its context better is a good thing. I do agree that the tool we use (CodeRabbit) is a little too nitpicky, but it's right way more often than it's wrong.
bee_rider 4 days ago | parent | prev

I don’t use any of these sorts of tools, so sorry for the naive questions… What sort of thing does it find? Bad smells (possibly known imperfections that were least-bad picks), bugs (maybe already triaged), or violations of the coding guidelines (maybe known and waivered)? I wonder if there’s a need for something like a RAG of known issues…
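The "RAG of known issues" idea could be sketched roughly like this: keep a store of findings that were already triaged or waivered, and suppress new review-bot findings that closely match one. A minimal stand-in sketch in Python, using the stdlib's `difflib` text similarity in place of real embeddings and a vector index; the `KnownIssueStore` name and the 0.8 threshold are invented for illustration:

```python
from dataclasses import dataclass, field
from difflib import SequenceMatcher

# Hypothetical sketch: a store of already-triaged/waivered review findings,
# used to filter repeated AI-review noise. A real system would use
# embeddings plus a vector index; SequenceMatcher is a stdlib stand-in.
@dataclass
class KnownIssueStore:
    waived: list[str] = field(default_factory=list)

    def add(self, finding: str) -> None:
        self.waived.append(finding)

    def is_known(self, finding: str, threshold: float = 0.8) -> bool:
        # Suppress a new finding if it is textually close to a waived one.
        return any(
            SequenceMatcher(None, finding.lower(), old.lower()).ratio() >= threshold
            for old in self.waived
        )

store = KnownIssueStore()
store.add("Function parse_config lacks error handling for missing file")

new_findings = [
    "Function parse_config lacks error handling for a missing file",  # near-duplicate
    "SQL query in get_user is vulnerable to injection",               # genuinely new
]
fresh = [f for f in new_findings if not store.is_known(f)]
```

Only the genuinely new finding survives the filter; the near-duplicate of an already-waivered issue is suppressed.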
baq 4 days ago | parent

GPT 5+ high+ review bots consistently find good issues for me on average. Sometimes they’re bogus, but sometimes they’re really, really good finds. I was impressed more than once.
|
|