isodev 2 days ago
I'm not sure why your architecture needs to be complex to support CI pipelines and a proper workflow for change management. And some of these guidelines have hardened into status-quo recipes. Take your starting database, for example: the guideline is always "SQLite only for testing, but for production you want Postgres". That's misleading and absolutely unnecessary. These defaults have also become embedded in PaaS services, e.g. the likes of Fly or Scaleway, where having a disk attached to a VM instance that you can write data to is never a default and is usually complicated or expensive to set up. All while there is nothing wrong with a disk that gets backed up; it can support most modern mid-sized apps out there before you need block storage and whatnot.
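For what it's worth, "SQLite in production" mostly comes down to two pragmas and a backup story. A minimal sketch, assuming Python's stdlib sqlite3 and hypothetical paths on an attached, backed-up disk:

    # Sketch only: production-ish SQLite on an attached disk.
    # Paths are hypothetical; stdlib sqlite3, no extra dependencies.
    import sqlite3

    con = sqlite3.connect("/data/app.db")
    con.execute("PRAGMA journal_mode=WAL")    # readers don't block the writer
    con.execute("PRAGMA synchronous=NORMAL")  # common WAL durability trade-off
    con.execute("PRAGMA busy_timeout=30000")  # wait on locks instead of erroring

    # Consistent online snapshot, safe while the app keeps writing:
    snapshot = sqlite3.connect("/backups/app-snapshot.db")
    con.backup(snapshot)
    snapshot.close()

Run the snapshot step on a schedule and ship the file wherever your disk backups go; that covers a lot of mid-sized apps before you need anything fancier.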
9dev a day ago
I've been involved in bootstrapping the infrastructure for several companies. You always start small and add more components over time. I dare say that on the projects I was involved in, we were fairly successful in balancing complexity, but some things really do just make sense. Using a container orchestration tool spares you from tending to actual Linux servers, for example, which need updates and firewalls and IP addresses and properly managed SSH keys. The complexity is still there, but it shifts somewhere else. Looking at the big picture, that might mean your knowledge requirements ease on the systems administration side and tighten on the cloud provider/IaC end; that can be a good trade-off if you're working with a team of younger software engineers without a strong Linux background, which I assume is pretty common these days.

Or consider redundancy: your customers likely expect your service not to have outages. That's a simple requirement, but very hard to get right, especially if a single server provides your application. Just introducing multiple copies of the app running in parallel requires changes in the app (you can't assume replica #1 will handle both the first and the second request unless you jump through sticky-session hoops, which is a rabbit hole of its own; the sketch below shows the failure mode), in your networking (HTTP requests to the domain must be distributed across multiple destinations), and in your deployment process (artefacts must go to multiple places, restarts need to be choreographed).

Many teams (in my experience) that have a disdain for complex solutions will choose their own bespoke way of solving these issues one by one, only to end up in a corner of their own making. I guess what I'm saying is pretty mundane, actually: solve the right problem at the right time, but no later.
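To make the replica-state point concrete: any state a request handler keeps in process memory silently forks once a load balancer spreads requests across copies. A toy sketch (all names hypothetical; the dict stands in for a shared store such as Postgres or Redis, and the two objects stand in for two app processes):

    # Two "replicas" of the same app, simulated in one process.
    class Replica:
        def __init__(self):
            self.visits = 0               # per-process memory, NOT shared

        def handle(self, session_id):
            self.visits += 1
            return self.visits

    shared = {}                           # stands in for Redis/Postgres/etc.

    class SharedStateReplica:
        def handle(self, session_id):
            shared[session_id] = shared.get(session_id, 0) + 1
            return shared[session_id]

    a, b = Replica(), Replica()
    print(a.handle("alice"), b.handle("alice"))    # 1 1 -> counter "resets"

    c, d = SharedStateReplica(), SharedStateReplica()
    print(c.handle("alice"), d.handle("alice"))    # 1 2 -> consistent

Externalizing the state is exactly the kind of app change the parent means: it's not hard, but it has to happen before the second replica exists, not after.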
zelphirkalt a day ago
Having recently built a Django app, I feel I need to highlight the issues that come with using SQLite. Once your models involve many-to-many relationships, suddenly all kinds of things are not supported by SQLite that work fine with Postgres. This also shows that you actually cannot (!) use SQLite for testing, because it behaves significantly differently from Postgres. So I now think: unless you have a really, really simple model and app, you are better off simply starting Postgres, or a Postgres container.
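For reference, pointing Django at a throwaway Postgres container is only a few lines. A sketch assuming a locally started container (credentials hypothetical), e.g. `docker run -e POSTGRES_PASSWORD=dev -p 5432:5432 postgres`:

    # settings.py (sketch; credentials match the hypothetical container above)
    DATABASES = {
        "default": {
            "ENGINE": "django.db.backends.postgresql",
            "NAME": "postgres",
            "USER": "postgres",
            "PASSWORD": "dev",
            "HOST": "127.0.0.1",
            "PORT": "5432",
        }
    }

Django's test runner then creates its own scratch database (prefixed test_) on that same server, so your tests exercise the same engine as production instead of SQLite's approximation of it.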
hinkley a day ago
Years ago we had someone who wanted to make sure that two deployments were mutually exclusive. Can't recall why now; something with a test environment and bootstrapping, so no redundancy. I just set up one build agent with a tag that both plans required. The simplest thing that could possibly work.
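If your CI system doesn't have agent tags, the same mutual exclusion can be had with an OS-level advisory lock. A hypothetical sketch, not what the parent used (Unix-only, relies on all deployments sharing one machine):

    # deploy_lock.py (sketch): serialize deployments via a lock file.
    import fcntl, subprocess, sys

    with open("/tmp/deploy.lock", "w") as lock:
        # Blocks until any other deployment holding the lock finishes.
        fcntl.flock(lock, fcntl.LOCK_EX)
        subprocess.run(sys.argv[1:], check=True)

Both plans then invoke e.g. `python deploy_lock.py ./deploy.sh test`, and only one runs at a time. Same idea as the single tagged agent: make the scarce resource explicit and let contention queue up behind it.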