mlhpdx 8 hours ago

Wow. Their experience could not be more different from mine. Tallying up the first year of my startup, I count 6,000 deployments, 99.997% uptime, and a low-single-digit rollback percentage (MTTR in the low single-digit minutes, with fractional, single-cell impact for those rollbacks so far). While I'm sure it's possible for a solo entrepreneur to hit numbers like that with a monolith, I have never done so, and haven't seen others do so.
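For scale, a back-of-envelope check of what those figures imply (a sketch using only the numbers quoted above; the "1.5%" rollback rate is an assumed midpoint of "low single digit"):

```python
# 99.997% uptime over one year, expressed as minutes of downtime.
MINUTES_PER_YEAR = 365 * 24 * 60  # 525,600 minutes
downtime_min = MINUTES_PER_YEAR * (1 - 0.99997)
print(round(downtime_min, 1))  # roughly 15.8 minutes of downtime per year

# A "low single digit" rollback percentage of 6,000 deployments,
# taking 1.5% as an illustrative midpoint.
rollbacks = 6000 * 0.015
print(round(rollbacks))  # on the order of 90 rollbacks over the year
```

So the claim amounts to about a quarter-hour of total downtime across thousands of deploys, which is consistent with the MTTR figure only if most rollbacks have no customer-visible impact.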

Edit: I’d love to eat humble pie here. If you have examples of monoliths being updated 10-20 times a day by a small (or large) team, post the link. I’ll read them all.

AlotOfReading 5 hours ago

The idea of deploying to production 10-20 times per day sounds terrifying. What's the rationale for doing so?

I'll assume you're not writing enough bugs that customers are reporting 10-20 new ones per day, but that leaves me confused about why you'd want to expose customers to that much churn. If we assume an observable issue results in a rollback and you're only rolling back 1-2% of the time (very impressive), then once a month or so customers should experience observable issues on multiple consecutive days. That would turn me off making such a service integral to my workflow.
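The "once a month" estimate can be sketched out (assumed midpoints, since the thread only gives ranges: 15 deploys/day and a 1.5% rollback rate):

```python
# Probability that at least one of the day's deploys gets rolled back,
# treating each deploy as an independent 1.5% risk.
p_rollback = 0.015
deploys_per_day = 15
p_bad_day = 1 - (1 - p_rollback) ** deploys_per_day  # ~0.20

# Expected number of consecutive bad-day pairs in a 30-day month:
# 29 adjacent day-pairs, each bad-bad with probability p_bad_day**2
# (ignoring correlation between days for simplicity).
pairs_per_month = 29 * p_bad_day ** 2  # ~1.2
```

Roughly one in five days sees a rollback, and about once a month two bad days land back to back, which matches the "observable issues across multiple subsequent days" framing above.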

mlhpdx 4 minutes ago

Speed is the rationale. I have zero hesitation to deploy, and at this point I'm extremely well practiced at decomposing changes into a series of small, safe changes. So maybe it's a single spelling correction, or perhaps it's the backend for a new service integration -- it's all the same to me.

Churn is kind of a loaded word; I'd just call it change. Improvements, efficiencies, additions and yes, of course, fixes.

It may be a little unfair to compare monoliths with distributed services when it comes to deployment counts. I often deploy three services (sometimes more) to implement a new feature, and that wouldn't be the case with a monolith. So yes, 100%, fewer deploys are needed in that world (I know, I've been there). Unfortunately, there is also a natural friction there that prevents deploying things as they become available. Google called that latency out in the DORA research for a reason.

et1337 3 hours ago

If something is difficult or scary, do it more often. Smaller changes are less risky. Code that is merged but not deployed is essentially “inventory” in the factory metaphor. You want to keep inventory low. If the distance between the main branch and production is kept low, then you can always feel pretty confident that the main branch is in a good state, or at least close to one. That’s invaluable when you inevitably need to ship an emergency fix. You can just commit the fix to main instead of trying to find a known good version and patching it. And when a deployment does break something, you’ll have a much smaller diff to search for the problem.

AlotOfReading 3 hours ago

There's a lot of middle ground between "deploy to production 20x a day" and "deploy so infrequently that you forget how to deploy". Like, once a day? I have nothing against emergency fixes, unless you're doing them 9-19x a day. Hotfixes should be uncommon (neither rare nor standard practice).