| ▲ | hnthrow0287345 4 days ago |
| >It's even worse than that: non-junior devs are doing it as well. This might be unpopular, but that seems more like an opportunity if we want to continue allowing AI to generate code. One of the annoying things engineers have to deal with is stopping whatever they're doing and doing a review. Obviously this gets worse if more total code is being produced. We could eliminate that interruption by having someone do more thorough code reviews, full-time. Someone who is not bound by sprint deadlines and tempted to gloss over reviews to get back to their own work. Someone who has time to pull down the branch and actually run the code and lightly test things from an engineer's perspective so QA doesn't hit super obvious issues. They can also be the gatekeeper for code quality and PR quality. |
|
| ▲ | marcosdumay 4 days ago | parent | next [-] |
| A full-time code reviewer will quickly lose touch with all practical matters and steer the codebase into some unmaintainable mess. This is not the first time somebody had that idea. |
| |
▲ | JohnBooty 4 days ago | parent | next [-] | | I've often thought this could work if the code reviewer was full-time, but rotated regularly. Just like a lot of jobs do with on-call weeks or weeks spent as release manager - if you have 10 engineers, once every ten weeks it's your turn to be on call. That would definitely solve the "code reviewer loses touch with reality" issue. Whether it would be a net reduction in disruption, I don't know. | | |
▲ | necovek 4 days ago | parent | next [-] | | Doing code review as described (actually diving deep, testing, etc.) for 10 engineers producing code is likely not going to be feasible unless they are really slow. In general, back in the 2000s, a team I was on employed a simple rule to ensure reviews happened in a timely manner: once you ask for a review, you have an obligation to do 2 reviews (as we required 2 approvals on every change). The biggest problem was when there wasn't stuff to review, so you carried "debt" over, and some never repaid it. But with a team of 15-30 people, it worked surprisingly well: no interrupts, quick response times. It did require writing good change descriptions along with testing instructions. We also introduced diff size limits to encourage iterative development and small context when reviewing (as obviously not all 15-30 people had the same deep knowledge of all the areas). | |
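A diff size limit like the one mentioned above is easy to enforce mechanically in CI. A minimal sketch, assuming `origin/main` as the base branch and an arbitrary 400-changed-line budget (both are placeholders, not details from the comment):

```python
# check_diff_size.py -- hypothetical CI gate enforcing a diff size limit.
# Assumptions (not from the comment above): "origin/main" as the base branch
# and a 400-changed-line budget; both are arbitrary placeholders.
import subprocess
import sys

MAX_CHANGED_LINES = 400

def changed_lines(base: str = "origin/main") -> int:
    # `git diff --numstat` prints "<added>\t<deleted>\t<path>" per file,
    # with "-" in place of the counts for binary files.
    out = subprocess.run(
        ["git", "diff", "--numstat", f"{base}...HEAD"],
        capture_output=True, text=True, check=True,
    ).stdout
    total = 0
    for line in out.splitlines():
        added, deleted, _path = line.split("\t", 2)
        if added != "-":
            total += int(added) + int(deleted)
    return total

if __name__ == "__main__":
    n = changed_lines()
    print(f"{n} changed lines (limit: {MAX_CHANGED_LINES})")
    sys.exit(0 if n <= MAX_CHANGED_LINES else 1)
```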
▲ | bee_rider 4 days ago | parent | prev [-] | | You could do some interesting layering strategies if you made it half time, for two people. Or maybe some staggered approach: each person does half time, full time, then half time again, with three people going through the sequence at a time. Make each commit require two sign-offs, and you could get a lot of review and maybe even induce some cooperation… | | |
| ▲ | kaffekaka 4 days ago | parent [-] | | "Interesting" is the word I would use as well, but also cumbersome and complicated. |
|
| |
▲ | jaggederest 4 days ago | parent | prev | next [-] | | I think it's workable if you make code review a primary responsibility, but not the only responsibility. I think this is a big thing at staff+ levels, doing more than your share of code review (and other high-level concerns, of course). | |
| ▲ | kragen 4 days ago | parent | prev [-] | | Linus Torvalds is effectively a full-time code reviewer, and so are most of his "lieutenants". It's not a new idea, as you say, but it works very well. |
|
|
| ▲ | sorokod 4 days ago | parent | prev | next [-] |
> One of the annoying things engineers have to deal with is stopping whatever they're doing and doing a review. I would have thought that reviewing PRs and doing it well is in the job description. You later mention "someone" a few times - who might that someone be? |
| |
▲ | bee_rider 4 days ago | parent [-] | | Can we make an LLM do it? “You are a cranky senior software engineer who loves to nitpick change requests. Here are your coding standards. You only sign off on a change after you are sure it works; if you run out of compute credits before you can prove it to yourself, reject the change as too complex.” Balance things, pit the LLMs against each other. | | |
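A minimal sketch of what such a review gate might look like, assuming an OpenAI-style chat API, a diff piped in from CI, and an APPROVE/REJECT convention (the script name, model name, and prompt wording are all placeholders, not anything the commenter specified):

```python
# review_bot.py -- hypothetical "cranky senior engineer" review gate.
# Assumptions (not from the comment above): the OpenAI Python SDK, an
# OPENAI_API_KEY in the environment, and the PR diff arriving on stdin, e.g.
#   git diff origin/main...HEAD | python review_bot.py
import sys
from openai import OpenAI

SYSTEM_PROMPT = (
    "You are a cranky senior software engineer who loves to nitpick change "
    "requests. Only sign off on a change if you are sure it works. "
    "Start your reply with APPROVE or REJECT on its own line, then list your findings."
)

def review(diff: str) -> str:
    client = OpenAI()  # reads OPENAI_API_KEY from the environment
    response = client.chat.completions.create(
        model="gpt-4o",  # placeholder model name
        messages=[
            {"role": "system", "content": SYSTEM_PROMPT},
            {"role": "user", "content": f"Review this diff:\n\n{diff}"},
        ],
    )
    return response.choices[0].message.content or ""

if __name__ == "__main__":
    verdict = review(sys.stdin.read())
    print(verdict)
    # Fail the CI job unless the bot explicitly approved.
    first_line = verdict.splitlines()[0].strip() if verdict else ""
    sys.exit(0 if first_line == "APPROVE" else 1)
```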
| ▲ | postflopclarity 4 days ago | parent | next [-] | | I do this all the time. I pass my code into "you are a skeptic and hate all the code my student produces: here is their latest PR etc.. etc.." | | |
▲ | osn9363739 4 days ago | parent [-] | | I have devs that do this and we have CI AI code review. Problem is, it always finds something. So the devs that have been in the code base for a while know what to ignore, while the new devs get bogged down in research. It's a net benefit as it forces them to learn, which they should be doing. It definitely slows them down, though, which goes against some of the productivity-boost claims I see. A human reviewer with codebase experience is still needed. | |
| ▲ | mywittyname 4 days ago | parent | next [-] | | Slowing down new developers by forcing them to understand the product and context better is a good thing. I do agree that the tool we use (code rabbit) is a little too nitpicky, but it's right way more than it's wrong. | |
| ▲ | bee_rider 4 days ago | parent | prev [-] | | I don’t use any of these sorts of tools, so sorry for the naive questions… What sort of thing does it find? Bad smells (possibly known imperfections but least-bad-picks), bugs (maybe triaged), or violations of the coding guides (maybe known and waivered)? I wonder if there’s a need for something like a RAG of known issues… | | |
| ▲ | baq 3 days ago | parent [-] | | GPT 5+ high+ review bots find consistently good issues on average for me, sometimes they’re bogus, but sometimes they’re really, really good finds. I was impressed more than once. | | |
|
|
| |
| ▲ | jjmarr 4 days ago | parent | prev | next [-] | | We do this at work and it's amazing. | |
| ▲ | cm2012 4 days ago | parent | prev [-] | | This would probably catch a lot of errors |
|
|
|
| ▲ | immibis 4 days ago | parent | prev | next [-] |
| As I read once: all that little stuff that feels like it stops you from doing your job is your job. |
|
| ▲ | gottagocode 4 days ago | parent | prev | next [-] |
| > We could eliminate that interruption by having someone doing more thorough code reviews, full-time. Someone who is not being bound by sprint deadlines and tempted to gloss over reviews to get back to their own work. This is effectively my role (outside of mentoring) as a lead developer over a team of juniors we train in house. I'm not sure many engineers would enjoy a day of only reviewing, me included. |
|
| ▲ | jms703 4 days ago | parent | prev | next [-] |
>One of the annoying things engineers have to deal with is stopping whatever they're doing and doing a review. Code reviews are a part of the job. Even at the junior level, an engineer should be able to figure out a reasonable time to take a break and shift efforts for a bit to handle things like code reviews. |
|
| ▲ | mywittyname 4 days ago | parent | prev | next [-] |
This sounds good in theory, but in practice, a person capable of doing a good job in this role would also be a good developer whose impact would be greater if they were churning out code. This is a combination of a lead engineer and an SDET. In reality, this ends up being the job given to the weakest person on the team to keep them occupied. And it gives the rest of the team a mechanism to get away with shoddy work and not face repercussions. Maybe I'm just jaded, but I think this approach would have horrible results. AI code review tools are already good. That makes for a good first pass. On my team, fixing Code Rabbit's issues, or having a good reason not to, is always step 1 of a PR. It catches a lot of really subtle bugs. |
|
| ▲ | shiandow 4 days ago | parent | prev | next [-] |
The best way I have of knowing code is correct on all levels is convincing myself I would write it the same way. The only way to be 100% sure is writing it myself. If I know someone reasonable managed to write the code, I can usually take some shortcuts and only look at the code style, common gotchas, etc. Of course it wouldn't be the first time I made some erroneous assumptions about how well considered the code was. But if none of the code is the product of any intelligent thought, well, I might as well stop reading and start writing. Reading code is 10x harder than writing it, after all. |
| |
▲ | necovek 3 days ago | parent [-] | | With time and experience, reading code becomes much easier. And well-written code is usually easy to read and understand too! The purpose of a code review is, apart from ensuring correctness, to ensure that the code that gets merged is easy to understand! And to be honest, if it's easy to understand, it's easy to ensure correctness too! The biggest challenge I had was to distinguish between explanations needed to understand the change, and explanations needed to understand the code after it was merged in. And making it clear in my code review questions that whatever question I have, I need code and comments in the code to answer them, not the author to explain it to me (I frequently have already figured out the why, but it took me longer than needed): it's not because I did not get it, it's because it should be clearer (finding the right balance between asking explicitly, offering a suggestion, or putting it as a question to prompt some thinking is non-trivial too). |
|
|
| ▲ | Spoom 4 days ago | parent | prev | next [-] |
| Big companies would outsource this position within a year, I guarantee it. It's highly measurable which means it can be "optimized". |
|
| ▲ | nunez 4 days ago | parent | prev | next [-] |
| This sounds like what unit tests after every commit and e2e tests before every PR are supposed to solve. |
|
| ▲ | ohwaitnvm 4 days ago | parent | prev [-] |
| So pair programming? |
| |
▲ | sodapopcan 4 days ago | parent | next [-] | | Yep, eliminates code reviews altogether. Unfortunately it remains widely unpopular, with people even saying “AI” can be the pair. | | |
▲ | gaigalas 4 days ago | parent [-] | | It does not eliminate code reviews. In practice, you should have at least one independent reviewer who did not actively work on the PR. That reviewer should also download the entire code, run it, make tests fail, and so on. In my experience, it's also good that this is not a fixed role, "the reviewer", but a responsibility everyone on the team shares (your next task should always be to review someone else's work; only pick up a new thing to do if there is nothing to review). This practice increases quality dramatically. | |
▲ | sodapopcan 4 days ago | parent [-] | | > It does not eliminate code reviews. Yes it does. There are many ways to do things, of course, and you can institute that there must be an independent reviewer, but I see this as a colossal waste of time that takes away one of the many benefits of pairing. Switch pairs frequently, and by frequently I really mean "daily," and there is no need for review. This also covers the "no fixed responsibilities" point you mentioned (which I absolutely agree with). Again, there are no rules for how things must be done, but this is my experience from three straight years of working this way, and it was highly effective. | |
▲ | gaigalas 4 days ago | parent | next [-] | | Mixed-level pairs (senior/junior), for example, are more about mentoring than reviewing. Those sessions do not qualify as "more than one pair of eyes". Excited (or maybe even stubborn) developers can often wear their pairs down by exhaustion, leading to "whatever you want" low-effort contributions. Pairs tend to under-document. They share an understanding they developed during the pairing session and forget to add important information or details to the PR or documentation channels. I'm glad it has been working for you. Maybe you work on a stellar team that doesn't have those issues. However, there are many scenarios that benefit a lot from an independent reviewer. | |
▲ | sodapopcan 3 days ago | parent [-] | | The problem with most people's experience with pairing is that they are given no guidance on how it should be done beyond "just work with someone." In terms of under-documentation, I didn't really find that. Most of my jobs have been either way under-documented or way over-documented (no one reads it and it gets outdated). Again I'll say that I find switching pairs daily is key, with no one person staying on for more than 2 days in a row (so if a story takes three days to complete, the two people who finished it are not the same two who started it). This keeps that internal knowledge well spread. But you're right, if you're dealing with overly excited/stubborn folk who refuse to play ball, that's obviously not going to work. Conversely, if you have a trusting team and someone is having one of those days where they feel useless and unmotivated, pairing can turn it into some kind of useful day. This could be because your colleague gets you excited, or, if you have high trust, I've literally said and had people say, "I'm not feeling it today... can you drive all day?" and you're still able to offer good support as a navigator and not have a total write-off of a day. On the point of improving an otherwise bad day, one of the more interesting arguments against pairing I've heard is that companies use it to make sure everyone is always working. That would indeed be sad if that were the motivation. |
| |
| ▲ | necovek 3 days ago | parent | prev [-] | | Pairing people together on a single task makes that task get done faster, with higher quality. However, when paired together, the people still pick up some of the same biases and hold the same assumptions and context, so it is really worse than having a single author + independent reviewer. So: single author, no review < pair programming, no review < single author + review < pair programming + review
| | |
| ▲ | sodapopcan 3 days ago | parent [-] | | Again, if you rotate pairs daily this doesn't happen. You could argue that creates a "team bias" but I call that cohesion. | | |
▲ | necovek 3 days ago | parent [-] | | Can you elaborate? Is this pairing on small tasks that get done in (less than) a day? Or do you also have longer-running implementations (a few days, a week, or maybe even more?) where you rotate the people who are on it (so one task might have had 5 or 10 people on it during the implementation)? If the latter, that sounds very curious and I'd love to learn more about how that is working, how people feel about it, what challenges you've seen, and what the successes are. It'd be great to see a write-up if you've had this running for a longer time! If it's only the former (smaller tasks), then this is where biases and assumptions stay (within the scope of that task): I still believe this is improved with an independent reviewer. | |
▲ | sodapopcan 3 days ago | parent [-] | | I do have a blog post about this sitting around along with all my other mostly completed posts. Finishing this one would be great as it's a topic I'm very passionate about! Our stories were broken up to take worst-case-scenario 5 days; the ideal was 1-3, of course. Many would be multi-day stories, so yes, if something ended up taking 5 days it was possible the whole team could have touched it (our team was only 6 devs so never more than 6). In short, there are the usual hurdles with pairing, the first one being that it takes some time to catch your stride, but once you do, things start flying. It is of course not perfect and you run into scenarios where something takes longer because as a new person joins the ticket, they want to tweak the implementation, and it can get pulled in different directions the longer it takes. But this also gets better the longer you keep at it. There are lots of pros AFAIC but a couple I like: you develop shared ownership of the codebase (as well as a shared style, which I think is neat), i.e., every part of the app had at least a few people with lots of context, so there was never any "Oh, Alice knows the most about that part, so let's wait until she's back to fix it" kinda thing. There is no "junior work." When a junior joins the team they are always pairing, so they ramp up really quickly (ideally they always drive for the first several months so they actually learn). Another downside would be that it's bad for resume-driven development. You don't "own" as many projects, so you have to get creative when you're asked about big projects you've led recently, especially if you're happily in an IC role. And also yes, if there was a super small task that was too small to pair on, we'd have a reviewer, but often these were literal one-line config changes. We'd also have an independent reviewer whenever migrations were involved or any risky task that needed to be run. | |
| ▲ | necovek 2 days ago | parent [-] | | Once you do publish the write-up (here's some encouragement!), hopefully it catches the attention of others so I don't miss it either :) |
|
|
|
|
|
|
| |
| ▲ | aslakhellesoy 4 days ago | parent | prev [-] | | Shhh don’t let them know! |
|