weitendorf 2 hours ago

1. The principle of least privilege is very important if you are interacting with a large-ish number of third-party APIs. It's not just about data ownership (i.e. one service per db) or even limiting blast radius (most likely you won't get hacked either way); it's about eliminating a single point of failure that makes you a more attractive/risky target if you are compromised, whether directly through your infrastructure, through a member of your team, or through improperly wielded automation.

2. Having at least some ability to run heterogeneous workloads in your production environment (i.e. being able to flip a switch and do microservices if you decide to) is very useful if you need to do more complicated migrations, integrate OSS/vendor software, or whip up a demo on short notice. Oftentimes you may not want to "do microservices" ideologically or as a focal point for development, but you can easily end up in a situation where you want "a microservice", and there can be an unnecessarily large number of obstacles to doing that if you've built all your tooling and ops around the assumption of "never microservices".

3. If you're working with open source software products and infra a lot, it's just way easier to e.g. launch a Stalwart container for email hosting than to figure out how to implement email hosting in your existing db and monolith. Also, see above: if you find a good OSS project that helps you do something much faster or more effectively, it's good for it to be easy to put in prod.

4. Claude Code and less experienced or skilled developers don't understand separation of concerns. Now that agentic development is picking up, even orgs that didn't need the organizational convenience before may find themselves needing it now. Personally, since becoming aware of the problem, this has been a major consideration in how I structure new projects.

CuriouslyC an hour ago

The architecture I like is either a modular monolith or, if you really need isolation, a FaaS setup with a single shared function library, so you're not calling from one function service to another but just composing code into a single endpoint that does exactly what you need.

HTTP RPC is the devil.
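
A rough sketch of what that composition can look like in Go (all names here are made up, just to illustrate the shared-library-into-one-endpoint shape rather than any particular codebase):

    // Hypothetical shared library code: each capability is a plain Go
    // function, not a separately deployed service.
    package main

    import (
        "encoding/json"
        "fmt"
        "net/http"
    )

    type Order struct {
        SKU string `json:"sku"`
        Qty int    `json:"qty"`
    }

    // These would live in a shared module; illustrative only.
    func validateOrder(o Order) error {
        if o.Qty <= 0 {
            return fmt.Errorf("quantity must be positive")
        }
        return nil
    }

    func priceOrder(o Order) float64 { return float64(o.Qty) * 9.99 }

    // One endpoint composed from library calls -- no function-to-function HTTP hops.
    func handleCreateOrder(w http.ResponseWriter, r *http.Request) {
        var o Order
        if err := json.NewDecoder(r.Body).Decode(&o); err != nil {
            http.Error(w, err.Error(), http.StatusBadRequest)
            return
        }
        if err := validateOrder(o); err != nil {
            http.Error(w, err.Error(), http.StatusUnprocessableEntity)
            return
        }
        json.NewEncoder(w).Encode(map[string]any{"total": priceOrder(o)})
    }

    func main() {
        http.HandleFunc("/orders", handleCreateOrder)
        http.ListenAndServe(":8080", nil)
    }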

weitendorf an hour ago

I generally agree, although I think gRPC (and using it with JSON) is awesome; it's just that, like with most of these tools, the investment in setup/tooling/integrating them into your work has to be worth what you get out of them.
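
For reference, the gRPC-with-JSON part can be as small as registering a custom codec with grpc-go; this is just the stock grpc-go codec mechanism (a minimal sketch, assuming protobuf-generated message types), nothing specific to our tooling:

    package jsoncodec

    import (
        "fmt"

        "google.golang.org/grpc/encoding"
        "google.golang.org/protobuf/encoding/protojson"
        "google.golang.org/protobuf/proto"
    )

    type jsonCodec struct{}

    func (jsonCodec) Marshal(v interface{}) ([]byte, error) {
        msg, ok := v.(proto.Message)
        if !ok {
            return nil, fmt.Errorf("not a proto message: %T", v)
        }
        return protojson.Marshal(msg)
    }

    func (jsonCodec) Unmarshal(data []byte, v interface{}) error {
        msg, ok := v.(proto.Message)
        if !ok {
            return fmt.Errorf("not a proto message: %T", v)
        }
        return protojson.Unmarshal(data, msg)
    }

    func (jsonCodec) Name() string { return "json" }

    func init() {
        // Once registered, clients can opt in per call with
        // grpc.CallContentSubtype("json"), and the wire format becomes
        // application/grpc+json instead of application/grpc+proto.
        encoding.RegisterCodec(jsonCodec{})
    }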

I actually used to work on FaaS at GCP, and now, through my startup, I'm working on a few projects for cloud services, tooling around them, and composable/linkable FaaS, if you're interested! A lot of this logic isn't in our public repos yet, but hit me up if you want to try fully automated Golang->proto+gRPC/http+openapi+ts bindings+client generation+linker-based builds.

A project we started for "modular monoliths" that need local state, but not the super high availability of a typical database, is at https://github.com/accretional/collector (though I had to put it on pause for the past few weeks to work on other stuff and figure out a better way to scale it). Basically it's a proto ORM on top of sqlite that auto-configures a bunch of CRUD/search and management endpoints for each "Collection", and additional Collections can be created at runtime via a CollectionRepo. The main thing I invested time into was backups/cloning of these sqlite dbs, since that's what lets you scale up and down more flexibly than a typical db without worrying as much about availability or misconfiguration.
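
This isn't the collector API (the real thing is proto-based and has a lot more around search, management endpoints, and backups), but the core shape is roughly this, sketched with plain byte values and github.com/mattn/go-sqlite3:

    // Rough sketch only: one sqlite table per "Collection", rows keyed by id,
    // values stored as serialized blobs.
    package main

    import (
        "database/sql"
        "fmt"

        _ "github.com/mattn/go-sqlite3"
    )

    type Collection struct {
        db   *sql.DB
        name string
    }

    // OpenCollection creates the table on demand, which is what lets new
    // Collections appear at runtime without a separate migration step.
    func OpenCollection(db *sql.DB, name string) (*Collection, error) {
        _, err := db.Exec(fmt.Sprintf(
            `CREATE TABLE IF NOT EXISTS %q (id TEXT PRIMARY KEY, value BLOB NOT NULL)`, name))
        if err != nil {
            return nil, err
        }
        return &Collection{db: db, name: name}, nil
    }

    func (c *Collection) Put(id string, value []byte) error {
        _, err := c.db.Exec(fmt.Sprintf(
            `INSERT INTO %q (id, value) VALUES (?, ?)
             ON CONFLICT(id) DO UPDATE SET value = excluded.value`, c.name), id, value)
        return err
    }

    func (c *Collection) Get(id string) ([]byte, error) {
        var value []byte
        err := c.db.QueryRow(fmt.Sprintf(
            `SELECT value FROM %q WHERE id = ?`, c.name), id).Scan(&value)
        return value, err
    }

    func main() {
        db, err := sql.Open("sqlite3", "collections.db")
        if err != nil {
            panic(err)
        }
        defer db.Close()

        users, err := OpenCollection(db, "users")
        if err != nil {
            panic(err)
        }
        if err := users.Put("u1", []byte(`{"name":"ada"}`)); err != nil {
            panic(err)
        }
        v, _ := users.Get("u1")
        fmt.Println(string(v))
    }

Because everything lives in ordinary sqlite files, backup/cloning is mostly a matter of copying or snapshotting those files, which is what makes scaling up/down and recovering from misconfiguration less scary than with a typical always-on database.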