9dev a day ago

You don't. When your server crashes, your availability is zero. It might crash for any of a myriad of reasons; at other times you might need to update the kernel to patch a security issue, for example, and be forced to take your app down yourself.

If your business can afford occasional downtime, by all means, go for it. Otherwise, you'll need to take precautions, and those will invariably make the system more complex than a single monolith with Postgres.

macspoofing a day ago | parent | next [-]

>You don't. When your server crashes, your availability is zero.

As your business needs grow, you can start layering complexity on top. The point is you don't start at 11 with an overly complex architecture.

In your example, if your server crashes, just make sure you have some sort of automatic restart. In practice that may mean a downtime of seconds for your 12 users. Is that more complexity? Sure - but not much. If you need to take your service down for maintenance, you notify your 12 users and schedule it for 2am ... etc.
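
A minimal sketch of what that automatic restart could look like, assuming systemd and a hypothetical /usr/local/bin/myapp binary (names and paths are placeholders, adjust to your setup):

    # /etc/systemd/system/myapp.service (hypothetical unit)
    [Unit]
    Description=My monolith
    After=network-online.target postgresql.service

    [Service]
    ExecStart=/usr/local/bin/myapp
    Restart=always
    RestartSec=2

    [Install]
    WantedBy=multi-user.target

Then systemctl enable --now myapp, and systemd brings the process back a couple of seconds after any crash.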

Later you could create a secondary cluster and stick a load balancer in front. You could also add a secondary replicated PostgreSQL instance. So the monolith/postgres architecture can actually take you far as your business grows.
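
For the load-balancer step, a rough sketch with nginx in front of two app instances (hostnames, ports and domain are hypothetical):

    # /etc/nginx/conf.d/myapp.conf
    upstream monolith {
        server app1.internal:3000;
        server app2.internal:3000;
    }

    server {
        listen 80;
        server_name example.com;

        location / {
            proxy_pass http://monolith;
            proxy_set_header Host $host;
            proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        }
    }

nginx round-robins between the two by default; mark one of them "backup" if you only want it used when the first is down.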

BillinghamJ a day ago | parent [-]

Changing/layering architecture adds risk. If you've got a standard way of working that you can easily throw in on day one, and whose fundamentals then don't need to change for years, that's way lower risk, easier, and faster.

It is common for founding engineers to start with a preexisting way of working that they import from their previous more-scaled company, and that approach is refined and compounded over time

It does mean starting with more than is strictly necessary, but that doesn't mean it has to be particularly complex. It means you start with heaps of already-solved problems that you simply never have to deal with, allowing you to focus on the product goals and the deep technical investments that need to be specific to the new company.

wouldbecouldbe a day ago | parent | prev | next [-]

Yeah, theoretically that sounds good. But I've had more downtime from cloud outages and Kubernetes updates than I ever had running a simple Linux server with nginx on hardware; most of the outages I had on Linux were on my VPS, caused by Digital Ocean's own hardware failures. AWS was down not so long ago.

And if certain servers do become very important, you just run a backup server on another VPS and switch over DNS (even if you keep a high TTL, most resolvers update within minutes nowadays), or if you want to be fancy, throw a load balancer in front of it.
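
A sketch of the DNS switchover as zone records (hypothetical name, IPs from the documentation range, TTL of your choosing):

    ; hypothetical zone entries
    app.example.com.  300  IN  A  203.0.113.10   ; primary VPS
    ; on failover, repoint the record at the standby:
    ; app.example.com.  300  IN  A  203.0.113.20  ; backup VPS

A lower TTL shortens the cut-over window, though as noted above, in practice many resolvers pick up the change within minutes even with a higher TTL.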

If you solve issues in a few minutes, people are always thankful, and most don't notice. With complicated setups it tends to take much longer to figure out what the issue is in the first place.

danmaz74 a day ago | parent | prev | next [-]

You can have redundancy with a monolithic architecture. Just put two web servers behind a proxy, and use Postgres with a hot standby (or use a managed Postgres instance, which already has that).
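
A rough sketch of the hot-standby half, assuming Postgres 12+ and hypothetical hostnames, users and paths (you'd also need a pg_hba.conf replication entry for the replication user):

    # seed the standby from the primary; -R writes primary_conninfo and
    # creates standby.signal so the node starts as a read-only hot standby
    pg_basebackup -h db-primary.internal -U replicator \
        -D /var/lib/postgresql/16/main -R -X stream

    # on the primary, postgresql.conf (close to the defaults already):
    wal_level = replica
    max_wal_senders = 10

Promotion when the primary dies is the separate, harder question (pg_ctl promote by hand, or a managed offering that handles it for you).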

pjmlp a day ago | parent | prev | next [-]

Well, load balancers are an option.

9dev a day ago | parent [-]

They are. But now you've expanded the definition of "a single monolith with postgres" to multiple replicas that need to be updated in sync; you've suddenly got shared state across multiple, fully isolated processes (in the best case) or across multiple nodes (in the worst case), plus a myriad of other subtle gotchas to account for, which raises the overall complexity considerably.

pjmlp a day ago | parent [-]

Postgres.

sfn42 a day ago | parent | prev [-]

I don't see how you solve this with microservices. You'll have to take down your services in these situations too; a microservices soup has the exact same problem as a monolith.

Also, in 5 years of working on both microservice-y systems and monoliths, not once have the things you describe been a problem for me. Everything I've hosted in Azure has been perfectly available pretty much all the time, unless a developer messed up or Azure itself had downtime that would have taken down either kind of app anyway.

But sure let's make our app 100 times more complicated because maybe some time in the next 10 years the complexity might save us an hour of downtime. I'd say it's more likely the added complexity will cause more downtime than it saves.

9dev a day ago | parent [-]

> I don't see how you solve this with microservices.

I don't think I implied that microservices are the solution, really. You can have a replicated monolith, but that absolutely adds complexity of its own.

> But sure let's make our app 100 times more complicated because maybe some time in the next 10 years the complexity might save us an hour of downtime.

Adding replicas and load balancing doesn't have to be a hundred times more complex.

> I'd say it's more likely the added complexity will cause more downtime than it saves.

As I said before, this is an assessment you will need to make for your use case, and balance uptime requirements against your complexity budget; either answer is valid, as long as you feel confident with it. Only a Sith believes in absolutes.