osn9363739, 4 days ago:
I have devs who do this, and we have AI code review in CI. The problem is that it always finds something. So the devs who have been in the codebase for a while know what to ignore, while the new devs get bogged down in research. It's a net benefit in that it forces them to learn, which they should be doing anyway. But it definitely slows them down, which cuts against some of the productivity-boost claims I see. A human reviewer with codebase experience is still needed.
mywittyname, 4 days ago:
Slowing down new developers by forcing them to better understand the product and its context is a good thing. I do agree that the tool we use (CodeRabbit) is a little too nitpicky, but it's right far more often than it's wrong.
bee_rider, 4 days ago:
I don’t use any of these sorts of tools, so sorry for the naive questions… What sort of things does it find? Bad smells (possibly known imperfections that were least-bad picks), bugs (maybe already triaged), or violations of the coding guidelines (maybe known and waivered)? I wonder if there’s a need for something like a RAG of known issues…
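A lighter-weight version of that idea, borrowed from linter "baseline" files rather than RAG, would be to fingerprint each AI finding and suppress ones the team has already acknowledged. This is just a sketch of the concept, not any real tool's behavior; all names and the finding schema here are hypothetical:

```python
import hashlib

def fingerprint(finding: dict) -> str:
    # Stable hash of rule + file + normalized message; line numbers are
    # deliberately excluded so acknowledged findings survive refactors.
    key = f"{finding['rule']}|{finding['file']}|{finding['message'].lower()}"
    return hashlib.sha256(key.encode()).hexdigest()[:16]

def filter_findings(findings: list[dict], baseline: set[str]) -> list[dict]:
    """Drop findings whose fingerprint is in the acknowledged baseline."""
    return [f for f in findings if fingerprint(f) not in baseline]

# One previously acknowledged nitpick, one genuinely new finding.
findings = [
    {"rule": "naming", "file": "api.py", "message": "Rename foo"},
    {"rule": "security", "file": "db.py", "message": "Unparameterized SQL"},
]
baseline = {fingerprint(findings[0])}  # team waived the naming nit earlier
new = filter_findings(findings, baseline)  # only the security finding remains
```

Experienced devs effectively keep this baseline in their heads; persisting it in the repo would give new devs the same filter without the research detour.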