DenisM 5 hours ago

Scale-up solves a lot of problems for stable workloads. But elasticity is poor, so you either live with overprovisioned capacity (multiples, not percentages) or fail under spiky load, which is often the most valuable moment (viral traffic, Black Friday, etc.).
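
A rough back-of-the-envelope with made-up numbers (say a 2,000 req/s steady state and a 15x viral peak) shows why the headroom ends up being multiples rather than percentages:

    # Hypothetical figures, just to illustrate the scale of the gap.
    baseline_rps = 2_000      # steady-state requests/sec
    spike_multiplier = 15     # viral / Black Friday peak vs. steady state
    safety_margin = 1.3       # headroom on top of the peak

    peak_rps = baseline_rps * spike_multiplier
    provisioned_rps = peak_rps * safety_margin

    print(f"peak: {peak_rps} rps")                                 # 30000
    print(f"provisioned: {provisioned_rps:.0f} rps")               # 39000
    print(f"vs. baseline: {provisioned_rps / baseline_rps:.1f}x")  # 19.5x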

No one has solved this problem. Scale out is typically more elastic, at least for reads.

kragen 4 hours ago | parent | next

That's a good point, but when one laptop can do 102,545 transactions per second, overprovisioned capacity is a much more reasonable thing to live with than it was back when you needed an Amdahl mainframe to hit 100 transactions per second.

DenisM 2 hours ago | parent

As compute becomes cheaper your argument becomes more and more true.

But it only works if workloads remain fixed. If workloads grow at similar rates you’re back to the same problem.

kragen 2 hours ago | parent

Well, it doesn't work for the newly added workloads. But for the most part we instead have the same workloads performed less efficiently.

masterj an hour ago | parent | prev | next

I suspect that for a large number of orgs, accepting over-provisioning would be significantly cheaper than the headcount required for a more sophisticated approach, while also letting them move faster thanks to lower overall complexity.
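
Rough numbers make the trade-off concrete; all of these figures are guesses, not real quotes:

    # Illustrative guesses only: a handful of spare dedicated boxes
    # vs. one extra infra hire.
    server_monthly_cost = 250       # one large dedicated box, USD/month
    extra_servers = 6               # headroom beyond what's strictly needed
    engineer_annual_cost = 180_000  # fully loaded cost of one hire, USD/year

    overprovision_annual = server_monthly_cost * extra_servers * 12
    print(f"overprovisioning: ${overprovision_annual:,}/yr")             # $18,000/yr
    print(f"extra headcount:  ${engineer_annual_cost:,}/yr")             # $180,000/yr
    print(f"ratio: {engineer_annual_cost / overprovision_annual:.0f}x")  # 10x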

CuriouslyC 2 hours ago | parent | prev

I love Hetzner for internal resources because they're not spiky. For external stuff I like to do co-processing: you can load balance out to Cloudflare/AWS/GCP services like Containers/Cloud Run/App Runner/etc.
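
A minimal sketch of that spillover idea, with placeholder endpoints and a made-up capacity figure: steady traffic stays on the cheap dedicated box, and only overflow is routed to an elastic cloud backend.

    # Toy spillover router; URLs and the capacity number are placeholders.
    PRIMARY = "https://app.internal.example"   # e.g. the Hetzner box
    OVERFLOW = "https://app.run.example"       # e.g. Cloud Run / App Runner
    PRIMARY_CAPACITY = 500                     # max in-flight requests it should absorb

    in_flight = 0

    def pick_backend() -> str:
        """Keep traffic on the primary until it's saturated, then spill over."""
        return PRIMARY if in_flight < PRIMARY_CAPACITY else OVERFLOW

    # Simulate a spike of 800 concurrent requests.
    routed = []
    for _ in range(800):
        routed.append(pick_backend())
        in_flight += 1

    print(routed.count(PRIMARY), "to primary,", routed.count(OVERFLOW), "to overflow")
    # -> 500 to primary, 300 to overflow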