9dev a day ago
You don't. When your server crashes, your availability is zero. It might crash for any of a myriad of reasons; sometimes you'll need to update the kernel to patch a security issue, for example, and are forced to take your app down yourself. If your business can afford irregular downtime, by all means, go for it. Otherwise, you'll need to take precautions, and that will invariably make the system more complex than that.
macspoofing a day ago
> You don't. When your server crashes, your availability is zero.

As your business needs grow, you can start layering complexity on top. The point is that you don't start at 11 with an overly complex architecture.

In your example, if your server crashes, just make sure you have some sort of automatic restart. In practice that may mean a downtime of seconds for your 12 users. Is that more complexity? Sure, but not much. If you need to take your service down for maintenance, you notify your 12 users and schedule it for 2am ... etc.

Later you could create a secondary cluster and stick a load balancer in front. You could also add a secondary replicated PostgreSQL instance. So the monolith/Postgres architecture can actually take you far as your business grows.
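A minimal sketch of what "some sort of automatic restart" can look like (assuming a systemd-based Linux host; the myapp name and paths are placeholders):

    # /etc/systemd/system/myapp.service
    [Unit]
    Description=The monolith
    After=network.target postgresql.service

    [Service]
    ExecStart=/usr/local/bin/myapp
    # relaunch on any crash, with a short pause between attempts
    Restart=always
    RestartSec=2

    [Install]
    WantedBy=multi-user.target

With something like this in place, a crash means a few seconds of downtime instead of waiting for someone to notice and log in.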
| |||||||||||||||||
wouldbecouldbe a day ago
Yeah, theoretically that sounds good. But I've had more downtime through cloud outages and Kubernetes updates than I ever had using a simple Linux server with nginx on hardware; most outages I had with my VPS were due to Digital Ocean's own hardware failures. AWS was down not so long ago.

And if certain servers do get very important, you just run a backup VPS and switch over DNS (even if you keep a high TTL, most resolvers update within minutes nowadays), or if you want to be fancy, throw a load balancer in front of it.

If you solve issues in a few minutes people are always thankful, and most don't notice. With complicated setups it tends to take much longer to figure out what the issue is in the first place.
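As a rough illustration of that DNS switchover (zone-file entries; example.com and the addresses are placeholders):

    ; normal operation: point at the primary server
    app.example.com.   300   IN   A   203.0.113.10
    ; during an outage, repoint at the backup VPS
    ; app.example.com. 300   IN   A   203.0.113.20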
danmaz74 a day ago
You can have redundancy with a monolithic architecture. Just have two different web servers behind a proxy, and use Postgres with a hot standby (or use a managed Postgres instance that already has that).
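A minimal sketch of that setup in nginx (addresses and ports are placeholders):

    # round-robin across two copies of the monolith
    upstream app_servers {
        server 10.0.0.11:8080;
        server 10.0.0.12:8080;
    }

    server {
        listen 80;
        location / {
            proxy_pass http://app_servers;
        }
    }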
pjmlp a day ago
Well, load balancers are an option. | |||||||||||||||||
| |||||||||||||||||
sfn42 a day ago
I don't see how you solve this with microservices. You'd have to take down your services in those situations too; a monolith vs. a microservices soup has the exact same problem.

Also, in 5 years of working on both microservice-y systems and monoliths, not once have these things you describe been a problem for me. Everything I've hosted in Azure has been available pretty much all the time, unless a developer messed up or Azure itself had downtime that would have taken down either kind of app anyway.

But sure, let's make our app 100 times more complicated because maybe some time in the next 10 years the complexity might save us an hour of downtime. I'd say it's more likely the added complexity will cause more downtime than it saves.
| |||||||||||||||||