▲ vbezhenar a day ago
The problem is that a library usually doesn't provide good enough boundaries. A C library can just shit all over your process memory. A Java library can raise all kinds of hell with your objects via reflection, or just call System.exit(LOL). The minimal boundary that keeps the demons at bay is the process boundary, and then you need some way for processes to talk to each other. If you're separating components into processes, it's very natural to put them on different machines, so you need your IPC to be network calls. One more step and you're implementing REST, because infra people love HTTP.
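As a minimal sketch of the point about in-process libraries, here is what any Java library on the plain classpath is free to do to its host (the `Account` class and its field are made up for illustration):

```java
import java.lang.reflect.Field;

// A domain object the host application considers private.
class Account {
    private long balance = 100;
    long balance() { return balance; }
}

public class HostileLibrary {
    public static void main(String[] args) throws Exception {
        Account account = new Account();

        // Nothing inside the process stops a library from rewriting
        // private state via reflection...
        Field f = Account.class.getDeclaredField("balance");
        f.setAccessible(true); // works by default for classes in the unnamed module
        f.setLong(account, -1_000_000);
        System.out.println(account.balance()); // prints -1000000

        // ...or from killing the whole JVM.
        System.exit(42);
    }
}
```

The module system tightens this up for code that opts into named modules, but plain classpath code stays wide open, and nothing short of the now-deprecated SecurityManager intercepts the exit call.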
|
▲ sevensor a day ago | parent | next
> it's very natural to put them on different machines, so you need your IPC to be network calls

But why is this natural? I'm not saying we shouldn't have network RPC, but it's not obvious to me that we should have only network RPC when there are cheap local IPC mechanisms.
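For concreteness, here is a minimal sketch of one such cheap local mechanism, a Unix domain socket, using the JDK 16+ API (the socket path is made up, and both endpoints live in one process only to keep the example self-contained):

```java
import java.net.StandardProtocolFamily;
import java.net.UnixDomainSocketAddress;
import java.nio.ByteBuffer;
import java.nio.channels.ServerSocketChannel;
import java.nio.channels.SocketChannel;
import java.nio.charset.StandardCharsets;
import java.nio.file.Files;
import java.nio.file.Path;

public class LocalIpcDemo {
    public static void main(String[] args) throws Exception {
        Path socketPath = Path.of("/tmp/demo.sock"); // hypothetical path
        Files.deleteIfExists(socketPath);
        UnixDomainSocketAddress addr = UnixDomainSocketAddress.of(socketPath);

        try (ServerSocketChannel server = ServerSocketChannel.open(StandardProtocolFamily.UNIX)) {
            server.bind(addr);

            // "Client" component: same machine, no TCP/IP stack involved.
            Thread client = new Thread(() -> {
                try (SocketChannel ch = SocketChannel.open(StandardProtocolFamily.UNIX)) {
                    ch.connect(addr);
                    ch.write(ByteBuffer.wrap("ping".getBytes(StandardCharsets.UTF_8)));
                } catch (Exception e) {
                    throw new RuntimeException(e);
                }
            });
            client.start();

            // "Server" component reads the message off the socket.
            try (SocketChannel conn = server.accept()) {
                ByteBuffer buf = ByteBuffer.allocate(16);
                conn.read(buf);
                buf.flip();
                System.out.println(StandardCharsets.UTF_8.decode(buf)); // ping
            }
            client.join();
        } finally {
            Files.deleteIfExists(socketPath);
        }
    }
}
```

The programming model is the same as for TCP sockets, which arguably supports the parent's point: the cheap local option doesn't require a different architecture, only a different address.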
▲ vbezhenar a day ago | parent

Because horizontal scaling is the best scaling method, and moving services to different machines is the easiest way to scale. Of course you can keep them on the same machine until you actually need to scale (maybe forever), but it makes sense to make some architectural decisions early that won't prevent scaling later, if the need arises.

Premature optimisation is the root of all evil, but premature pessimisation is not a good thing either. You should keep your options open unless you have a good reason not to. If your IPC involves moving gigabytes of transient data between components, maybe shared memory is the right tool. But usually that's not required.
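As a hedged sketch of the shared-memory option mentioned above, here is bulk data shared between local processes via a memory-mapped file in Java NIO (the `/dev/shm` path is a Linux-specific assumption that keeps the file in RAM; any path both processes can open would work):

```java
import java.nio.MappedByteBuffer;
import java.nio.channels.FileChannel;
import java.nio.file.Path;
import java.nio.file.StandardOpenOption;

public class SharedMemoryDemo {
    public static void main(String[] args) throws Exception {
        // Hypothetical file both processes agree on.
        Path file = Path.of("/dev/shm/bulk-data");

        try (FileChannel ch = FileChannel.open(file,
                StandardOpenOption.CREATE,
                StandardOpenOption.READ,
                StandardOpenOption.WRITE)) {
            // Map 1 GiB once; a consumer process maps the same file and
            // sees the data with no serialisation and no socket copies.
            MappedByteBuffer region = ch.map(FileChannel.MapMode.READ_WRITE, 0, 1L << 30);
            region.putLong(0, 42); // producer writes directly into shared pages
        }
    }
}
```

The trade-off is the one described above: this is the fastest way to move gigabytes between local components, but it hard-wires the assumption that both components share a machine, which is exactly the option horizontal scaling wants to keep open.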
▲ strken 21 hours ago | parent

I'm not sure horizontal scaling necessarily requires a network call between two hosts. If you have an API gateway service, a user auth service, a projects service, and a search service, some of them will be lightweight enough to reasonably run on the same host together. If you deploy the user auth and projects services together, you can horizontally scale the number of hosts they're deployed on without introducing a network call between them.

This is somewhat common in containerisation, where e.g. Kubernetes lets you set up sidecars for logging and so on, but I suspect it could go a lot further. Many microservices aren't doing big fan-out calls and don't require much in the way of hardware.
|
|
|
▲ pjmlp a day ago | parent | prev
And then we're back to the 1980s UNIX process model from before the wide adoption of dynamic loading, but because we need to be cool we call them microservices.