fragmede 3 hours ago

Even with code review, a well-configured CI/CD system is going to include a wealth of automated unit and integration tests, plus a complex deploy system involving canaries and ramp-up and blue/green deployment and flags and monitoring and alerts, backed by a pager and an on-call rotation with runbooks. Code review will simply never be perfect and catch 100% of issues, so systems are designed with that in mind.
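To make the canary/ramp-up idea concrete, here's a minimal sketch of a staged rollout loop with automatic rollback. The ramp steps, error threshold, and `observe_error_rate` callback are all illustrative assumptions, not any real deploy system's API:

```python
# Hypothetical canary ramp-up: shift traffic in steps, roll back on a bad signal.
# Steps, threshold, and the metrics callback are made-up for illustration.

ROLLOUT_STEPS = [1, 5, 25, 50, 100]   # percent of traffic on the new version
ERROR_THRESHOLD = 0.01                # abort if canary error rate exceeds 1%

def ramp_up(observe_error_rate) -> str:
    """Increase traffic step by step; roll back at the first bad reading."""
    for percent in ROLLOUT_STEPS:
        if observe_error_rate(percent) > ERROR_THRESHOLD:
            return f"rolled back at {percent}%"
    return "fully deployed"

# Healthy canary: error rate stays low at every step.
print(ramp_up(lambda pct: 0.001))                          # fully deployed
# A regression that only shows up under real traffic gets caught mid-ramp.
print(ramp_up(lambda pct: 0.05 if pct >= 25 else 0.001))   # rolled back at 25%
```

The point of the staged percentages is exactly the "review won't catch 100%" argument: the system assumes bad code will ship sometimes and limits the blast radius.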

So then the question is: what's actually reasonable given today's code-generating tools? 0% review seems foolish, but 100% seems similarly unrealistic. Automated code review systems like CodeRabbit are, dare I even say, reasonable as a first line of defense these days. It all comes down to developer velocity balanced against system stability. Error budgets, like the ones Google's SRE org is able to enforce against (some of) the services it supports, are one way of accomplishing that, but those are hard to put into practice.
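For readers unfamiliar with error budgets, the arithmetic is simple: the budget is whatever downtime the SLO leaves you, and once it's spent, risky deploys freeze. This is a hedged sketch with example numbers, not Google's actual tooling:

```python
# Hypothetical error-budget check. The 99.9% SLO and 30-day window are
# example values; real SLOs and enforcement policies vary per service.

def error_budget_minutes(slo: float, window_days: int = 30) -> float:
    """Total allowed downtime (minutes) in the window for a given availability SLO."""
    total_minutes = window_days * 24 * 60
    return (1.0 - slo) * total_minutes

def deploys_frozen(downtime_so_far_min: float, slo: float, window_days: int = 30) -> bool:
    """Freeze risky deploys once the window's budget is exhausted."""
    return downtime_so_far_min >= error_budget_minutes(slo, window_days)

# A 99.9% SLO over 30 days allows roughly 43.2 minutes of downtime.
print(round(error_budget_minutes(0.999), 1))   # 43.2
print(deploys_frozen(50.0, 0.999))             # True: budget blown, no more deploys
```

The enforcement part ("no more deploys") is the hard bit the comment alludes to; the math is trivial, the politics aren't.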

So then, as you say, it takes an act of Congress to get anything deployed.

So in the abstract, imo it all comes down to the quality of the automated CI/CD system, and to developers being on call for their own service so they feel the pain of unreliability and don't just throw code over the wall. But it's all talk at this level of abstraction. The reality of a given company's office politics, and how much leverage the platform teams and whatever passes for SRE there have vs the rest of the company, makes all the difference.