threethirtytwo · 6 hours ago
> This is only true if you change the shared type in a way which is not backwards compatible. One of the major tenets of services is that you must not introduce backwards incompatible changes. If you want to make a fundamental change, the process isn't "change APIv1 to APIv2", it's "deploy APIv2 alongside APIv1, mark APIv1 as deprecated, migrate clients to APIv2, remove APIv1 when there's no usage."

Agreed, and this is a negative. Backwards compatibility is a restriction made to deal with something fundamentally broken. Additionally, eventually in any system of services you will have to make a breaking change. Backwards compatibility is a behavioral coping mechanism for a fundamental issue of microservices.

> This may seem arduous, but the reality is that most monoliths already deal with this limitation! Don't believe me? Think about a typical n-tier architecture with a backend that talks to a database; how do you do a naive, simple rename of a database column in e.g. MySQL in a zero-downtime manner? You can't. You need to have some strategy for dealing with the backwards incompatibility.

I believe you, and I'm already aware. That limitation exists intrinsically; it exists because you have no choice. A database and a monolith need to exist as separate services. The thing I'm addressing here is the microservices vs. monolith debate. If you choose microservices, you are CHOOSING for this additional problem to exist. If you choose a monolith, then within that monolith you are CHOOSING for those problems not to exist. I am saying that regardless of the other issues with either architecture, this one is an invariant in the sense that, for this specific thing, a monolith is categorically better.

> Having seen the logical outcome of this at AWS, Hootsuite, Splunk, among others: no this isn't true at all really. e.g. The RDS team operated services independently of the EC2 team, despite calling out to EC2 in the backend; in no way was it a distributed monolith.

No, you're categorically wrong. If they did this in ANY of the companies you worked at, then they are living with this issue. What I'm saying here isn't an opinion. It is a theorem-like consequence that will occur IF all the axioms are satisfied: namely, at least two services that communicate with each other and are NOT deployed simultaneously. This is logic. The only way errors or issues never happened with any of the teams you worked with is if the services they were building NEVER needed to make a breaking change to the communication channel, or they never needed to communicate. Neither of these scenarios is practical.
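To make the quoted column-rename point concrete: the usual zero-downtime strategy is an expand/contract (parallel change) migration. A minimal sketch, with made-up table and column names and a stub run() helper standing in for a real migration runner:

```python
# Sketch of the expand/contract ("parallel change") pattern for renaming a
# MySQL column without downtime. Table/column names and run() are hypothetical;
# in practice each step is its own migration and deploy.

MIGRATION_STEPS = [
    # 1. Expand: add the new column alongside the old one.
    "ALTER TABLE users ADD COLUMN full_name VARCHAR(255) NULL;",
    # 2. Deploy application code that writes to BOTH columns but still reads
    #    the old one (old and new app versions now coexist safely).
    # 3. Backfill existing rows.
    "UPDATE users SET full_name = name WHERE full_name IS NULL;",
    # 4. Deploy application code that reads the new column.
    # 5. Contract: once no running code touches the old column, drop it.
    "ALTER TABLE users DROP COLUMN name;",
]


def run(sql: str) -> None:
    """Stand-in for executing a migration against the database."""
    print(f"-- would execute: {sql}")


if __name__ == "__main__":
    for sql in MIGRATION_STEPS:
        run(sql)
```

Old and new application versions overlap at every step, which is exactly the backwards-compatibility discipline being debated here.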
mjr00 · 6 hours ago
> The only way errors or issues never happened with any of the teams you worked with is if the services they were building NEVER needed to make a breaking change to the communication channel, or they never needed to communicate. Neither of these scenarios is practical.

IMO the fundamental point of disagreement here is that you believe it is effectively impossible to evolve APIs without breaking changes. I don't know what to tell you other than: I've seen it happen, at scale, in multiple organizations. I can't say that EC2 will never make a breaking change that causes RDS, Lambda, or auto-scaling to break, but if they do, it'll be front-page news.
kccqzy · 6 hours ago
> The only way errors or issues never happened with any of the teams you worked with is if the services they were building NEVER needed to make a breaking change to the communication channel, or they never needed to communicate.

This is correct.

> Neither of these scenarios is practical.

This is not. When you choose appropriate tools (protobuf being an example), it is extremely easy to make a non-breaking change to the communication channel, and it is also extremely easy to prevent breaking changes from ever being made.
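As an illustration of that claim (this is not protobuf's actual API, just a plain-Python sketch of the property it relies on: readers ignore fields they don't recognize and new fields are optional, so purely additive changes cannot break old clients):

```python
# Plain-Python sketch of additive, non-breaking schema evolution.
# All names here are made up for illustration.
import json
from dataclasses import dataclass


@dataclass
class UserV1:
    """The message as an old client knows it."""
    id: int
    name: str

    @classmethod
    def decode(cls, payload: bytes) -> "UserV1":
        data = json.loads(payload)
        # Tolerant reader: pick out only the fields this version knows,
        # silently ignoring anything newer.
        return cls(id=data["id"], name=data["name"])


# A "v2" writer has added an email field -- an additive, non-breaking change.
v2_message = json.dumps(
    {"id": 7, "name": "ada", "email": "ada@example.com"}
).encode()

# The old client still decodes it without errors.
print(UserV1.decode(v2_message))  # UserV1(id=7, name='ada')
```

Preventing actual breaking changes (removing or repurposing an existing field) then becomes a mechanical schema-compatibility check rather than a matter of discipline.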