| ▲ | eYrKEC2 10 hours ago |
| You know what I think is better than a push of the CPU stack pointer and a jump to a library? A network call. Because nothing could be better for your code than putting the INTERNET into the middle of your application. -- The "micro" of microservices has always been ridiculous. If it can run on one machine then do it. Otherwise you have to deal with networking. Only do networking when you have to. Not as a hobby, unless your program really is a hobby. |
|
| ▲ | NeutralCrane 7 hours ago | parent | next [-] |
Microservices have nothing to do with the underlying hosting architecture. Microservices can all run and communicate on a single machine. There will be a local network involved, but it absolutely does not require the internet or multiple machines.
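
A minimal sketch of that point in Go (the ports and service names are made up): two "services" in one program, on one machine, talking over the loopback interface, with no internet involved.

    // loopback.go — a sketch (hypothetical names/ports) of two "microservices"
    // running on one machine and talking over 127.0.0.1 only.
    package main

    import (
        "fmt"
        "io"
        "net/http"
        "time"
    )

    func main() {
        // "Service A": answers on a local port.
        go http.ListenAndServe("127.0.0.1:8081", http.HandlerFunc(
            func(w http.ResponseWriter, r *http.Request) {
                fmt.Fprint(w, "hello from service A")
            }))

        time.Sleep(100 * time.Millisecond) // crude wait for the listener; fine for a sketch

        // "Service B": calls service A over the local network stack.
        resp, err := http.Get("http://127.0.0.1:8081/")
        if err != nil {
            panic(err)
        }
        defer resp.Body.Close()
        body, _ := io.ReadAll(resp.Body)
        fmt.Println("service B received:", string(body))
    }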
|
| ▲ | yieldcrv 9 hours ago | parent | prev | next [-] |
It's not really "micro" so much as "discrete", as in special-purpose and one-off, to ensure consistent performance as opposed to shared performance. Yes, networking is the bottleneck between the processes, while one machine is the bottleneck to end users.
▲ | Nextgrid 9 hours ago | parent [-]
> one machine is the bottleneck to end users
You can run your monolith on multiple machines and round-robin end-user requests between them. Your state is in the DB anyway.
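
A rough sketch of that setup in Go, using the standard library's httputil.ReverseProxy (the backend addresses are placeholders, not anything from the thread): requests hit one proxy and are rotated across identical monolith instances, all of which share the same database.

    // roundrobin.go — a sketch of round-robining end-user requests across
    // several copies of the same monolith; shared state lives in the DB.
    package main

    import (
        "net/http"
        "net/http/httputil"
        "net/url"
        "sync/atomic"
    )

    func mustParse(s string) *url.URL {
        u, err := url.Parse(s)
        if err != nil {
            panic(err)
        }
        return u
    }

    func main() {
        // Two identical monolith instances (placeholder addresses).
        backends := []*url.URL{
            mustParse("http://10.0.0.1:8080"),
            mustParse("http://10.0.0.2:8080"),
        }

        var next uint64
        proxy := &httputil.ReverseProxy{
            Director: func(r *http.Request) {
                // Pick the next backend in rotation.
                b := backends[atomic.AddUint64(&next, 1)%uint64(len(backends))]
                r.URL.Scheme = b.Scheme
                r.URL.Host = b.Host
            },
        }

        http.ListenAndServe(":80", proxy)
    }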
▲ | yieldcrv 8 hours ago | parent [-]
I do bare metal sometimes and I like the advances in virtualization for many processes there too
|
|
|
| ▲ | ikiris 9 hours ago | parent | prev [-] |
| Not everything you think you know is right. https://github.com/sirupsen/napkin-math |
▲ | josephg 9 hours ago | parent [-]
Well-implemented network hardware can have high bandwidth and low latency. But that doesn't get around the complexity and headaches it brings. Even with the best fiber optics, wires can be cut or tripped over. Controllers can fail. Drivers can be buggy. Networks can be misconfigured. And so on. Any request - even one sent over a local network - can and will fail on you eventually. And you can't really make a microservice system keep working properly when links start failing. Local function calls are infinitely more reliable.

The main operational downside of a binary monolith is that a bug in one part of the program will crash the whole thing. Honestly, I still think Erlang got it right here with supervisor trees. Use "microservices", but let them all live on the same computer, in the same process. And add tooling to the runtime environment to allow individual "services" to fail or get replaced without taking down the rest of the system.
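
Erlang/OTP itself isn't shown here, but a rough single-process analogue of that supervisor idea might look like the following Go sketch (the service names are invented): each "service" is a goroutine, and a panic in one is caught and the service restarted without taking down the others.

    // supervisor.go — a rough, single-process analogue (not Erlang/OTP) of a
    // supervisor: crashed "services" are restarted; the rest keep running.
    package main

    import (
        "log"
        "time"
    )

    // supervise runs fn in a loop, restarting it whenever it panics.
    func supervise(name string, fn func()) {
        go func() {
            for {
                func() {
                    defer func() {
                        if r := recover(); r != nil {
                            log.Printf("%s crashed: %v; restarting", name, r)
                        }
                    }()
                    fn()
                }()
                time.Sleep(time.Second) // simple backoff before restart
            }
        }()
    }

    func main() {
        supervise("billing", func() {
            // Pretend work that occasionally fails.
            time.Sleep(2 * time.Second)
            panic("simulated bug")
        })
        supervise("search", func() {
            for {
                log.Println("search: still serving")
                time.Sleep(3 * time.Second)
            }
        })
        select {} // keep the process alive
    }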
|