strken a day ago:

I don't see why it's ridiculous to have an architectural barrier for org reasons. Requiring every component to be behind a network call seems like overkill in nearly all cases, but encapsulating complexity into a library where domain experts can maintain it is how most software gets built. You've got to lock those demons away where they can't affect the rest of the users.
vbezhenar a day ago:

The problem is that a library usually does not provide good enough boundaries. A C library can just shit all over your process memory. A Java library can wreak havoc on your objects with reflection, or just call System.exit(LOL). The minimal boundary that keeps the demons at bay is a process boundary, and then you need some way for processes to talk to each other. If you're separating components into processes, it's very natural to put them on different machines, so you need your IPC to be network calls. One more step and you're implementing REST, because infra people love HTTP.
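To make that concrete, here is a minimal Java sketch (all class and field names invented for illustration): once a library runs inside your process, it can rewrite your private state via reflection or take the whole JVM down, and nothing at the library boundary stops it.

    import java.lang.reflect.Field;

    public class LeakyBoundaryDemo {

        static class HostConfig {
            private String databaseUrl = "jdbc:postgresql://prod-db/app";
        }

        // Pretend this method ships inside a third-party library you linked against.
        static void helpfulLibraryCall(Object hostObject) throws Exception {
            Field f = hostObject.getClass().getDeclaredField("databaseUrl");
            f.setAccessible(true);                 // quietly bypasses 'private'
            f.set(hostObject, "jdbc:postgresql://somewhere-else/app");
            // Or, on a bad day, take the whole process with it:
            // System.exit(1);
        }

        public static void main(String[] args) throws Exception {
            HostConfig config = new HostConfig();
            helpfulLibraryCall(config);
            System.out.println(config.databaseUrl);   // prints the rewritten value
        }
    }

Inside one process, "private" is a convention, not a boundary.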
sevensor a day ago:

> it's very natural to put them on different machines, so you need your IPC to be network calls

But why is this natural? I'm not saying we shouldn't have network RPC, but it's not obvious to me that we should have only network RPC when there are cheap local IPC mechanisms.
vbezhenar a day ago:

Because horizontal scaling is the best scaling method, and moving services to different machines is the easiest way to scale. Of course you can keep them on the same machine until you actually need to scale (maybe forever), but it makes sense to make some architectural decisions early that won't prevent scaling in the future, if the need arises.

Premature optimisation is the root of all evil, but premature pessimisation is not a good thing either. You should keep your options open unless you have a good reason not to. If your IPC involves moving gigabytes of transient data between components, maybe it's a good idea to use shared memory. But usually that's not required.
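One way to keep that option open in code, sketched with invented names: callers depend on an interface, so whether the implementation runs in-process or behind a network hop stays a deployment decision instead of being baked into every call site.

    // A single file (ProjectsService.java) keeps the sketch self-contained.
    public interface ProjectsService {
        String projectName(long projectId);
    }

    // Today: a plain in-process implementation, no network involved.
    class InProcessProjectsService implements ProjectsService {
        @Override
        public String projectName(long projectId) {
            return "project-" + projectId;      // stand-in for a real lookup
        }
    }

    // Later, if scaling demands it: same interface, backed by HTTP/gRPC/whatever.
    // class RemoteProjectsService implements ProjectsService { ... }

    class Gateway {
        private final ProjectsService projects;

        Gateway(ProjectsService projects) {     // injected, so the transport can change
            this.projects = projects;
        }

        String describe(long projectId) {
            return "Viewing " + projects.projectName(projectId);
        }
    }

    class Demo {
        public static void main(String[] args) {
            Gateway gateway = new Gateway(new InProcessProjectsService());
            System.out.println(gateway.describe(42));
        }
    }

The pessimisation you avoid is scattering transport details through the call sites; the cost of the network hop is deferred until it's actually needed.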
strken 21 hours ago:

I'm not sure I see that horizontal scaling necessarily requires a network call between two hosts. If you have an API gateway service, a user auth service, a projects service, and a search service, then some of them will be lightweight enough that they can reasonably run on the same host together. If you deploy the user auth and projects services together, then you can horizontally scale the number of hosts they're deployed on without introducing a network call between them.

This is somewhat common in containerisation, where e.g. Kubernetes lets you set up sidecars for logging and so on, but I suspect it could go a lot further. Many microservices aren't doing big fan-out calls and don't require much in the way of hardware.
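A rough sketch of that co-location, with invented endpoints and both "services" squeezed into one JDK-only program for brevity: the auth stand-in binds to the loopback interface, so a co-deployed projects service reaches it without the request ever leaving the host, and scaling out means replicating the whole host or pod rather than adding a cross-host hop.

    import com.sun.net.httpserver.HttpServer;
    import java.net.InetSocketAddress;
    import java.net.URI;
    import java.net.http.HttpClient;
    import java.net.http.HttpRequest;
    import java.net.http.HttpResponse;
    import java.nio.charset.StandardCharsets;

    public class ColocatedServicesDemo {
        public static void main(String[] args) throws Exception {
            // Stand-in for the auth service, listening only on localhost.
            HttpServer auth = HttpServer.create(new InetSocketAddress("127.0.0.1", 9001), 0);
            auth.createContext("/check", exchange -> {
                byte[] body = "token-ok".getBytes(StandardCharsets.UTF_8);
                exchange.sendResponseHeaders(200, body.length);
                exchange.getResponseBody().write(body);
                exchange.close();
            });
            auth.start();

            // The co-located projects service calls it over the loopback interface.
            HttpClient client = HttpClient.newHttpClient();
            HttpRequest request = HttpRequest.newBuilder(
                    URI.create("http://127.0.0.1:9001/check")).build();
            HttpResponse<String> response =
                    client.send(request, HttpResponse.BodyHandlers.ofString());
            System.out.println("auth said: " + response.body());

            auth.stop(0);
        }
    }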
|
pjmlp a day ago:

And then we're back to the 1980s UNIX process model from before the wide adoption of dynamic loading, but because we need to be cool we call them microservices.
|
|
kace91 a day ago:

> Requiring every component to be behind a network call seems like overkill in nearly all cases

That's what I was referring to, sorry for the inaccurate adjective.

Most people try to split a monolith into domains, move code out into libraries, or stuff like that - but IMO you rarely avoid a shared space that imports the subdomains, with blurry/leaky boundaries and with ownership falling between the cracks. Microservices are better at avoiding that shared space, as there is less expectation of an orchestrating common space. But as you say, the cost is ridiculous.

I think there's an unfilled space for an architectural design that somehow enforces boundaries and avoids common spaces as strongly as microservices do, without the physical separation.
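For what it's worth, one existing mechanism that points in that direction is the Java module system (JDK 9+), sketched here with invented package names: only the exported package is visible to other modules, while the internal package can't be imported at compile time or pried open with reflection at run time, all inside a single process.

    // module-info.java for a hypothetical 'projects' subdomain
    module com.example.projects {
        exports com.example.projects.api;   // the only surface other teams can compile against
        // com.example.projects.internal is deliberately not exported: imports from other
        // modules fail at compile time, and reflective access is refused at run time
        // unless this module explicitly 'opens' the package.
    }

It doesn't settle ownership by itself, but it does turn the shared space into something you have to declare explicitly rather than something that accretes.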
sevensor a day ago:

How about old-fashioned interprocess communication? You can have separate codebases, written in different languages, with different responsibilities, running on the same computer. Way fewer moving parts than RPC over a network.
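A minimal sketch of that kind of IPC using Unix domain sockets, which the JDK supports natively since Java 16. The socket path and message are made up, and both ends live in one program here purely for brevity; in practice they would be two separate processes, possibly written in different languages, sharing only the socket path.

    import java.net.StandardProtocolFamily;
    import java.net.UnixDomainSocketAddress;
    import java.nio.ByteBuffer;
    import java.nio.channels.ServerSocketChannel;
    import java.nio.channels.SocketChannel;
    import java.nio.charset.StandardCharsets;
    import java.nio.file.Files;
    import java.nio.file.Path;

    public class LocalIpcDemo {
        public static void main(String[] args) throws Exception {
            Path socketPath = Path.of("/tmp/local-ipc-demo.sock");  // illustrative path
            Files.deleteIfExists(socketPath);
            UnixDomainSocketAddress address = UnixDomainSocketAddress.of(socketPath);

            // "Server" process: accepts one connection and sends a reply.
            Thread server = new Thread(() -> {
                try (ServerSocketChannel listener =
                         ServerSocketChannel.open(StandardProtocolFamily.UNIX)) {
                    listener.bind(address);
                    try (SocketChannel peer = listener.accept()) {
                        peer.write(ByteBuffer.wrap(
                            "hello from the other process".getBytes(StandardCharsets.UTF_8)));
                    }
                } catch (Exception e) {
                    e.printStackTrace();
                }
            });
            server.start();

            // "Client" process: connects via a filesystem path, no network stack involved.
            Thread.sleep(200);  // crude wait for the listener to bind; fine for a sketch
            try (SocketChannel channel = SocketChannel.open(StandardProtocolFamily.UNIX)) {
                channel.connect(address);
                ByteBuffer buffer = ByteBuffer.allocate(256);
                channel.read(buffer);
                buffer.flip();
                System.out.println(StandardCharsets.UTF_8.decode(buffer));
            }

            server.join();
            Files.deleteIfExists(socketPath);
        }
    }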
|