isodev 2 days ago

I'm not sure why your architecture needs to be complex to support CI pipelines and a proper change-management workflow.

And some of these guidelines have hardened into status-quo recipes. Take your starting database, for example: the guideline is always "SQLite only for testing, but for production you want Postgres" - it's misleading and absolutely unnecessary. These defaults have also become embedded into PaaS services, e.g. the likes of Fly or Scaleway - having a disk attached to a VM instance where you can write data is never a default, and it's usually complicated or expensive to set up. All while there is nothing wrong with a disk that gets backed up - it can support most modern mid-sized apps out there before you need block storage and whatnot.
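For what it's worth, backing up a SQLite file on such a disk is a couple of lines with Python's stdlib; a minimal sketch (paths are made up):

    # Minimal sketch, stdlib only: an online snapshot of a SQLite database
    # sitting on the instance's attached disk. Paths are placeholders.
    import sqlite3

    src = sqlite3.connect("/data/app.db")
    dst = sqlite3.connect("/backups/app-snapshot.db")
    src.backup(dst)  # consistent copy, safe while the app keeps writing
    dst.close()
    src.close()

Ship that snapshot off the box on a schedule and the "disk that gets backed up" covers a surprising share of real-world needs.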

9dev a day ago | parent | next [-]

I've been involved in bootstrapping the infrastructure for several companies. You always start small and add more components over time. I dare say that on the projects I was involved in, we were fairly successful at balancing complexity, but some things really do just make sense. Using a container orchestration tool, for example, spares you from tending to actual Linux servers that need updates, firewalls, IP addresses, and properly managed SSH keys. The complexity is still there, it just shifts somewhere else. Looking at the big picture, that might mean your knowledge requirements ease on the systems administration side and tighten on the cloud provider/IaC end; that can be a good trade-off if you're working with a team of younger software engineers who don't have a strong Linux background, which I assume is pretty common these days.

Or consider redundancy: your customers likely expect your service not to have outages. That's a simple requirement, but very hard to get right, especially if a single server provides your application. Just introducing multiple copies of the app running in parallel requires changes in the app (you can't assume replica #1 will handle both the first and the second request, unless you jump through sticky-session hoops, which are a rabbit hole of their own), in your networking (HTTP requests to the domain must be spread across multiple destinations), and in your deployment process (artefacts must go to multiple places, restarts need to be choreographed).
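To make the first point concrete, here is a tiny illustrative sketch (all names made up): per-process state that works fine on one box quietly breaks once a load balancer spreads requests over two replicas, which is exactly what pushes you toward sticky sessions or an external store.

    # Illustrative sketch only; a real app would use a database or Redis
    # as the shared store, not an in-memory dict.
    local_sessions: dict[str, dict] = {}  # lives inside ONE replica's memory

    def handle_naive(session_id: str) -> dict:
        # Fine on a single server; with two replicas, requests routed to the
        # other process never see this session (hence sticky sessions).
        return local_sessions.setdefault(session_id, {})

    class SharedStore:
        """Stand-in for an external store (database, Redis, ...)."""
        def __init__(self) -> None:
            self._data: dict[str, dict] = {}

        def get(self, key: str) -> dict:
            return self._data.get(key, {})

        def put(self, key: str, value: dict) -> None:
            self._data[key] = value

    shared = SharedStore()

    def handle_shared(session_id: str) -> dict:
        # Any replica can serve any request, because the state lives
        # outside the process.
        session = shared.get(session_id)
        session["hits"] = session.get("hits", 0) + 1
        shared.put(session_id, session)
        return session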

Many teams (in my experience) that have a disdain for complex solutions will choose their own, bespoke way of solving these issues one by one, only to end up in a corner of their own making.

I guess what I'm saying is pretty mundane actually—solve the right problem at the right time, but no later.

zelphirkalt a day ago | parent | prev | next [-]

Having recently built a Django app, I feel like I need to highlight the issues that come with using SQLite. Once you get into many-to-many relationships in your model, suddenly all kinds of things are not supported by SQLite that are supported by Postgres. This also shows that you actually cannot (!) use SQLite for testing, because it behaves significantly differently from Postgres.

So I think now: unless you have a really really simple model and app, you are just better off starting Postgres or a Postgres container.
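For what it's worth, pointing Django at a local Postgres (say, a throwaway container) is only a few lines of settings. A minimal sketch with placeholder names and credentials, not my actual config:

    # settings.py: minimal sketch, placeholder names/credentials.
    # Testing against the same engine as production avoids the
    # SQLite/Postgres behaviour gaps mentioned above.
    import os

    DATABASES = {
        "default": {
            "ENGINE": "django.db.backends.postgresql",
            "NAME": os.environ.get("DB_NAME", "app"),
            "USER": os.environ.get("DB_USER", "app"),
            "PASSWORD": os.environ.get("DB_PASSWORD", "app"),
            "HOST": os.environ.get("DB_HOST", "127.0.0.1"),
            "PORT": os.environ.get("DB_PORT", "5432"),
        }
    }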

isodev a day ago | parent [-]

My point is that this is a choice that should be made for each project, depending on what you're building: does your model require features not supported by SQLite, or by Postgres, etc.?

> Unless you have a really really simple model and app

And this is the wrong conclusion. I have a really really complex model that works just fine with SQLite. So it's not about how complex the model is, it's about what you need. In the same way, the original post had so many storage types, no doubt because of such "common knowledge" guidelines.

zelphirkalt a day ago | parent [-]

OK, well, you don't always know all the requirements ahead of time. When I do find out about them later on, I don't want to have to switch the database backend. For example, initially I thought I would avoid those many-to-many relationships altogether ... but they turned out to be the most fitting way to do what I needed in Django.

I guess you could say "use SQLite as long as it lends itself well to what you are doing", sure. But when do you switch? At the first inconvenience? Or do you wait a while, until N workarounds have been put into the codebase? And don't forget the organizational resistance to things like changing the database. People not in the know (management, usually) might question your plan to switch the database, because the workaround for this small little inconvenience _right now_ seems like much less work and less risky for production ... Before you know it, you will have 10 workarounds in there, and the sunk-cost fallacy kicks in.

I may be exaggerating a little, but the picture I'm painting here isn't hard to imagine.

isodev a day ago | parent [-]

You're right, and it's OK to lean on experience to anticipate certain constraints for a project. My point really is that it's just not an absolute default, and it should not be included as a "general guideline" or recommendation in documentation, tutorials, and blog posts. There is also a substantial difference between SMEs and bigger corporate situations, where architecture changes are practically a religious matter.

Changing the database can create friction, but at that moment you can also ask yourself: what is the cost of adding and learning a giant stateful component with its own maintenance needs (Postgres), versus, say, adapting our schema to be more compatible with what we already have (e.g. the lightweight and much cheaper SQLite, though the argument works for whatever you already run)?
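To make that concrete, this is roughly the handful of pragmas people set when keeping SQLite as the main store instead of adding Postgres; a minimal stdlib-only sketch, with an illustrative path and values rather than a recommendation:

    # Minimal sketch, Python stdlib only; path and values are illustrative.
    import sqlite3

    conn = sqlite3.connect("/data/app.db", timeout=5.0)
    conn.execute("PRAGMA journal_mode=WAL;")    # readers no longer block the single writer
    conn.execute("PRAGMA synchronous=NORMAL;")  # common durability/speed trade-off with WAL
    conn.execute("PRAGMA busy_timeout=5000;")   # wait on a locked database instead of erroring
    conn.execute("PRAGMA foreign_keys=ON;")     # SQLite enforces FKs per connection, opt in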

I'd much rather see folks thinking about that. The same goes for caching, CDNs, and whatever Cloudflare is selling this week to hook people on their platform (DDoS/API gateway protections come in many variants; we're not all 1Password, and sometimes it's OK to just turn on your hosting provider's firewall).

hinkley a day ago | parent | prev [-]

Years ago we had someone who wanted to make sure that two deployments were mutually exclusive. I can't recall why now, but it had something to do with a test environment and bootstrapping, so there was no redundancy.

I just set one build agent up with a tag that both plans required. The simplest thing that could possibly work.