wowohwow 9 hours ago

I think your point, while valid, is probably a lot more nuanced. From the post it's more akin to an Amazon-style shared build and deployment system than an "every library update forces a redeploy of everything" scenario.

It's likely there's a single source of truth you pull libraries or shared resources from. When team A wants to update the pointer for library-latest to 2.0 while the current reference is still 1.0, everyone needs to migrate off of it, otherwise things will break due to backwards-incompatible changes or whatever.

Likewise, if there's a -need- to remove a version for a vulnerability or what have you, then everyone needs to redeploy, sure, but the benefit of centralization likely outweighs the security cost and complexity of tracking the patching and deployment process for each and every service.

I would say those systems -are-, and likely would be classified as, microservices, but from a cost and ease perspective they operate within a shared-services environment. I don't think it's fair to consider this style of design decision a distributed monolith.

By that level of logic, having a singular business entity vs 140 individual business entities for each service would mean it's a distributed monolith.

mjr00 9 hours ago | parent | next [-]

> It's likely there's a single source of truth you pull libraries or shared resources from. When team A wants to update the pointer for library-latest to 2.0 while the current reference is still 1.0, everyone needs to migrate off of it, otherwise things will break due to backwards-incompatible changes or whatever.

No, this misses one of the biggest benefits of services; you explicitly don't need everyone to upgrade library-latest to 2.0 at the same time. If you do find yourself in a situation where you can't upgrade a core library like e.g. SQLAlchemy or Spring, or the underlying Python/Java/Go/etc runtime, without requiring updates to every service, you are back in the realm of a distributed monolith.

rbranson 6 hours ago | parent | next [-]

This is explicitly called out in the blog post in the trade-offs section.

I was one of the engineers who helped make the decisions around this migration. There is no one size fits all. We believed in that thinking originally, but after observing how things played out, decided to make different trade-offs.

nine_k 4 hours ago | parent | next [-]

To me it sounds like this: "We realized that we were not running a microservice architecture, but rather a distributed monolith, so it made sense to make it a regular monolith." It's a decision I would wholeheartedly agree with.

necovek 2 hours ago | parent [-]

I don't think you read the post carefully enough: they were not running a distributed monolith, and every service was using different dependencies (versions of them).

This meant that it was costly to maintain and caused a lot of confusion, especially with internal dependencies (shared libraries): this is the trade-off they did not like and wanted to move away from.

They moved away from this in multiple steps, the first of which was making it a "distributed monolith" (as per your implied definition) by putting the services in a monorepo and then making them use the same dependency versions (before finally making them a single service too).

nine_k 26 minutes ago | parent | next [-]

The blog post says that they had a microservice architecture, then introduced some common libraries which broke the assumptions of compatibility across versions, forcing mass updates if a common dependency was updated. This is when they realized that they were no longer running a microservice architecture, and fused everything into a proper monolith. I see no contradiction.

petersellers an hour ago | parent | prev | next [-]

I think the blog post is confusing in this regard. For example, it explicitly states:

> We no longer had to deploy 140+ services for a change to one of the shared libraries.

Taken in isolation, that is a strong indicator that they were indeed running a distributed monolith.

However, the blog post earlier on said that different microservices were using different versions of the library. If that was actually true, then they would never have to deploy all 140+ of their services in response to a single change in their shared library.

lukevp 12 minutes ago | parent | next [-]

Say you have a shared telemetry library, and you realize that you are missing an important metric to operationalize your services. You now need to deploy all 140 to get the benefit.

Your runtime version is out of date / end of life. You now need to update and deploy all 140 (or at least all the ones that use the same tech stack).

No matter how you slice it, there are always dependencies across all services because there are standards in the environment in which they operate, and there are always going to be situations where you have to redeploy everything or large swaths of things.

Microservices aren’t a panacea. They just let you delay the inevitable but there is gonna be a point where you’re forced to comply with a standard somewhere that changes in a way that services must be updated. A lot of teams use shared libraries for this functionality.
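
To make this concrete, here's a rough Python sketch of the kind of shared wrapper I'm talking about (the helper and the metrics client are hypothetical, not from the article). A metric added in v2 of the library doesn't show up anywhere until every consuming service bumps its pin and redeploys:

    import time

    # Hypothetical shared telemetry helper (v2.x of "shared-telemetry").
    # Each service wraps its handlers with this.
    def instrument(metrics_client, handler):
        def wrapped(request):
            start = time.monotonic()
            try:
                return handler(request)
            finally:
                # Existed since v1: every service already emits this.
                metrics_client.timing("handler.latency_ms",
                                      (time.monotonic() - start) * 1000)
                # Added in v2: invisible for any of the 140 services still
                # pinned to v1 until they upgrade and redeploy.
                metrics_client.increment("handler.requests_total")
        return wrapped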

raphinou 12 minutes ago | parent | prev [-]

Except if it's a security fix?

tstrimple an hour ago | parent | prev [-]

If a change requires cascading changes in almost every other service then yes, you're running a distributed monolith and have achieved zero separation of services. Doesn't matter if each "service" has a different stack if they are so tightly coupled that a change in one necessitates a change in all. This is literally the entire point of micro-services. To reduce the amount of communication and coordination needed among teams. When your team releases "micro-services" which break everything else, it's a failure and hint of a distributed monolith pretending to be micro-services.

wowohwow 4 hours ago | parent | prev | next [-]

FWIW, I think it was a great write up. It's clear to me what the rationale was and had good justification. Based on the people responding to all of my comments, it is clear people didn't actually read it and are opining without appropriate context.

mjr00 6 hours ago | parent | prev [-]

> There is no one size fits all.

Totally agree. For what it's worth, based on the limited information in the article, I actually do think it was the right decision to pull all of the per-destination services back into one. The shared library problem can go both ways, after all: maybe the solution is to remove the library so your microservices are fully independent, or maybe they really should have never been independent in the first place and the solution is to put them back together.

I don't think either extreme of "every line of code in the company is deployed as one service" or "every function is an independent FaaS" really works in practice, it's all about finding the right balance, which is domain-specific every time.

mekoka 6 hours ago | parent | prev | next [-]

You're both right, but talking past each other. You're right that shared dependencies create a problem, but they can be the problem without semantically redefining the services themselves as a distributed monolith. Imagine someone came to you with a similar problem and you concluded "distributed monolith", which might lead them to believe that their services should be merged into a single monolith. What if they then told you that would be tough, because these were truly separate apps that happened to use the same OS-wide Python install: one ran on Django/Postgres, another on Flask/SQLite, and another was on FastAPI/Mongo, but they all relied on some of the same underlying libs that are frequently updated. The more accurate finger points at bad dependency management, and you'd tell them about virtualenv or Docker.

wowohwow 9 hours ago | parent | prev | next [-]

I disagree. Both can be true at the same time. A good design should not point to library-latest in a production setting; it should point to a stable, known-good version via direct reference, i.e. library-1.0.0-stable.

However, in the world we live in, people choose to point to latest to avoid manual work, trusting that other teams did the right diligence when updating to the latest version.

You can point to a stable version in the model I described and still be distributed and a microservice, while depending on a shared service or repository.
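
As a sketch of what I mean (a hypothetical setup.py with made-up names), the owning team pins the shared library and decides when to take the next version, instead of floating on whatever the registry published last:

    # Hypothetical setup.py for one service; "shared-lib" stands in for the
    # internal library being discussed.
    from setuptools import setup

    setup(
        name="destination-service",
        version="3.4.1",
        install_requires=[
            # Pinned to a known-good release: the owning team decides when
            # to bump and redeploy. This is the "library-1.0.0-stable" model.
            "shared-lib==1.0.0",
            # The problematic alternative is an open-ended constraint, where
            # the next build silently picks up 2.0.0 from the registry:
            # "shared-lib>=1.0",
        ],
    )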

vlovich123 9 hours ago | parent [-]

You can do that but you keep missing that you’re no longer a true microservice as originally defined and envisioned, which is that you can deploy the service independently under local control.

Can you imagine if Google could only release a new API if all their customers simultaneously updated to that new API? You need loose coupling between services.

OP is correct that you are indeed now in a weird hybrid monolith application where it’s deployed piecemeal but can’t really be deployed that way because of tightly coupled dependencies.

Be ready for a blog post in ten years about how they broke apart the monolith into loosely coupled components because it was too difficult to ship things with a large team and actually have it land in production without getting reverted due to an unrelated issue.

GeneralMayhem 8 hours ago | parent | next [-]

Internal and external have wildly different requirements. Google internally can't update a library unless the update is either backward-compatible for all current users or part of the same change that updates all those users, and that's enforced by the build/test harness. That was an explicit choice, and I think an excellent one, for that scenario: it's more important to be certain that you're done when you move forward, so that it's obvious when a feature no longer needs support, than it is to enable moving faster in "isolation" when you all work for the same company anyway.

But also, you're conflating code and services. There's a huge difference between libraries that are deployed as part of various binaries and those that are used as remote APIs. If you want to update a utility library that's used by importing code, then you don't need simultaneous deployment, but you would like to update everywhere to get it done with - that's only really possible with a monorepo. If you want to update a remote API without downtime, then you need a multi-phase rollout where you introduce a backward-compatibility mode... but that's true whether you store the code in one place or two.
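
For the remote API case, the multi-phase rollout usually looks something like this minimal sketch (field names invented): the server accepts both the old and the new request shape while clients migrate, and the compatibility path is deleted once nothing sends the old shape anymore.

    # Toy sketch of a backward-compatibility mode during a rollout.
    # Phase 1: the server accepts both shapes. Phase 2: clients migrate at
    # their own pace. Phase 3: once traffic shows no more "uid" callers,
    # the old branch is deleted.
    def extract_user_id(payload: dict) -> str:
        if "user_id" in payload:      # new clients
            return payload["user_id"]
        if "uid" in payload:          # old clients (compatibility mode)
            return payload["uid"]
        raise ValueError("request is missing a user id")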

vlovich123 7 hours ago | parent [-]

The whole premise of microservices is loose coupling - external just makes it plainly obvious that tight coupling is a non-starter. If you’re not loosely coupling, you can call it microservices but it’s not really.

Yes, I understand it’s a shared library, but if updating that shared library automatically updates everyone and isn’t backward compatible, you’re doing it wrong - that library should be published as a v2, or dependents should pin to a specific version. But having a shared library with backward-incompatible changes that is automatically vendored into all downstream dependencies is insane. You literally wouldn’t be able to keep track of your BOM in version control, as it obtains a time component based on when you built the service and the version that was published in the registry.

GeneralMayhem 6 hours ago | parent [-]

> if updating that shared library automatically updates everyone and isn’t backward compatible, you’re doing it wrong - that library should be published as a v2, or dependents should pin to a specific version

...but why? You're begging the question.

If you can automatically update everyone including running their tests and making any necessary changes to their code, then persisting two versions forever is a waste of time. If it's because you can't be certain from testing that it's actually a safe change, then fine, but note that that option is still available to you by copy/pasting to a v2/ or adding a feature flag. Going to a monorepo gives you strictly more options in how to deal with changes.

> You literally wouldn’t be able to keep track of your BOM in version control as it obtains a time component based on when you built the service

This is true regardless of deployment pattern. The artifact that you publish needs to have pointers back to all changes that went into it/what commit it was built at. Mono vs. multi-repo doesn't materially change that, although I would argue it's slightly easier with a monorepo since you can look at the single history of the repository, rather than having to go an extra hop to find out what version 1.0.837 of your dependency included.

> the version that was published in the registry

Maybe I'm misunderstanding what you're getting at, but monorepo dependencies typically don't have a registry - you just have the commit history. If a binary is built at commit X, then all commits before X across all dependencies are included. That's kind of the point.

vlovich123 3 hours ago | parent [-]

> ...but why? You're begging the question. If you can automatically update everyone including running their tests and making any necessary changes to their code, then persisting two versions forever is a waste of time.

I’m not begging the question. I’m simply stating what loose coupling looks like, and the blog post describes precisely the problem of tight coupling. If you have multiple teams working on a tightly coupled system you’re asking for trouble. This is why software projects inevitably decompose along team boundaries and you ship your org chart - communication and complexity are really hard to manage as the head count grows, which is where loose coupling helps.

But this article isn’t about moving from federated codebases to a single monorepo as you propose. They used that as an intermediary step to then enable making it a single service. But the point is that making a single giant service is a well-studied problem. I saw this constantly at Apple when I worked on CoreLocation, where locationd was a single service responsible for so many things (GPS, time synchronization of Apple Watches, WiFi location, motion, etc.) that there was an entire team managing the process of getting everything to work correctly within a single service, and even then people constantly stepped on each other’s toes accidentally and caused builds that were not suitable. It was a mess, and the team that should have identified it as a bottleneck in need of solving (i.e. splitting out separate loosely coupled services) instead just kept rearranging deck chairs.

> Maybe I'm misunderstanding what you're getting at, but monorepo dependencies typically don't have a registry - you just have the commit history

I’m not opposed to a monorepo, which I think may be where your confusion is coming from. I’m suggesting that slamming a bunch of microservices back together is a poorly thought out idea, because you’ll still end up with a launch coordination bottleneck, and rolling back one team’s work forces other teams to roll back as well. It’s great that the person in charge got to write a rah-rah blog post for their promo packet. Come talk to me in 3 years with actual on-the-ground engineers saying they are having no difficulty shipping a large, tightly coupled monolithic service, or that they haven’t had to build out a team to help architect a service where all the different teams can safely and correctly coexist.

My point about the registry is that they took one problem - a shared library that multiple services depend on through a registry, pinned to latest, causing problems deploying - and nuked it from orbit by using a monorepo (ok - this is fine and a good solution - I can be a fan of monorepos provided your infrastructure can make it work) and by making it a monolithic service (probably not a good idea, one that only sounds good when you’re looking for things to do).

necovek 2 hours ago | parent [-]

> I’m not begging the question. I’m simply stating what loose coupling looks like and the blog post is precisely the problem of tight coupling.

But it is not! They were updating dependencies and deploying services separately, and this led to every one of the 140 services using a different version of "shared-foo". This made it cumbersome, confusing and expensive to keep going (if you want a new feature from shared-foo, you have to take all the other changes unless you fork and cherry-pick on top, which makes it not a shared-foo anymore).

The point is that a true microservice approach will always lead to exactly one of these situations: a) you do not extract shared functions and live with duplicate implementations, b) you enforce keeping your shared dependencies always on very-close-to-latest (which you can do with different strategies; a monorepo is one that enables but does not require it), or c) you end up with a mess of versions being used by each individual service.

The most common middle ground is to insist on backwards compatibility in a shared-lib, but carrying that over 5+ years is... expensive. You can mix it with an "enforce update" approach ("no version older than 2 years can be used"), but all the problems are pretty evident and expected with any approach.

I'd always err on the side of having the capability to upgrade everything at once if needed, while keeping the ability to keep a single service on a pinned version. This is usually not too hard with any approach, though a monorepo makes the first one appear easier (you edit one file, or multiple dep files in a single repo). But unless you can guarantee all services get replaced in a deployment at exactly the same moment — which you rarely can — or can accept short-lived inconsistencies, deployment requires all services to be backwards compatible until they are all updated, with either approach.

I'd also say that this is still not a move to a monolith, but to a Service-Oriented-Architecture that is not microservices (as microservices are also SOA): as usual, the middle ground is the sweet spot.

wowohwow 8 hours ago | parent | prev | next [-]

To reference my other comment: this thread is about the nuance of whether a dependency on a shared software repository means you are a microservice or not. I'm saying it's immaterial to the definition.

A dependency on an external software repository does not make a microservice no longer a microservice. It's the deployment configuration around said dependency that matters.

smaudet 7 hours ago | parent | prev | next [-]

> Be ready for a blog post in ten years how they broke apart the monolith into loosely coupled components because it was too difficult to ship things with a large team and actually have it land in production without getting reverted to an unrelated issue.

For some of their "solutions" I kind of wonder how they plan on resolving things, like the black-box "magic" queue service they subbed back in, or the fault tolerance problem.

That said, I do think if you have a monolith that just needs to scale (single service that has to send to many places), they are possibly taking the correct approach. You can design your code/architecture so that you can deploy "services" separately, in a fault tolerant manner, but out of a mono repo instead of many independent repos.

vlovich123 3 hours ago | parent [-]

My issue isn’t with the monorepo but with slamming all the microservices into a single monolithic service (the last part of the blog post).

dmoy 8 hours ago | parent | prev [-]

> Can you imagine if Google could only release a new API if all their customers simultaneously updated to that new API? You need loose coupling between services.

Internal Google services: *sweating profusely*

(Mostly in jest, it's obviously a different ballgame internal to the monorepo on borg)

deaddodo 6 hours ago | parent | prev | next [-]

The dependencies they're likely referring to aren't core libraries, they're shared interfaces. If you're using protobufs, for instance, and you share the interfaces in a repo, updating Service A's interface(s) necessitates updating all services that communicate with it as well (whether you utilize those changes or not). Generally, for larger systems built by smaller/scrappier teams, a true dependency management tree for something like this is out of scope, so they just redeploy everything in a domain.

mjr00 6 hours ago | parent | next [-]

> If you're using protobufs, for instance, and you share the interfaces in a repo, updating Service A's interface(s) necessitates updating all services that communicate with it as well (whether you utilize those changes or not).

This is not true! This is one of the core strengths of protobuf. Non-destructive protobuf changes, such as adding new API methods or new fields, do not require clients to update. On the server-side you do need to handle the case when clients don't send you the new data--plus deal with the annoying "was this int64 actually set to 0 or is it just using the default?" problem--but as a whole you can absolutely independently update a protobuf, implement it on the server, and existing clients can keep on calling and be totally fine.

Now, that doesn't mean you can go crazy, as doing things like deleting fields, changing field numbering or renaming APIs will break clients, but this is just the reality of building distributed systems.
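
To illustrate the "was this int64 actually set to 0?" point: if the new scalar field is declared optional in proto3, the generated Python code exposes presence via HasField, so the server can tell an old client that never set the field apart from a new client that explicitly sent 0. Rough sketch, with invented message and field names:

    # Assumes a proto3 message along these lines, with `event` being an
    # instance of the generated class:
    #   message TrackEvent {
    #     string user_id = 1;
    #     optional int64 retry_count = 2;  // added later; old clients omit it
    #   }

    DEFAULT_RETRIES = 3

    def retries_for(event) -> int:
        # `optional` proto3 scalars have explicit presence, so HasField
        # distinguishes "client sent 0" from "client never set the field".
        if event.HasField("retry_count"):
            return event.retry_count
        return DEFAULT_RETRIES  # old client: fall back to a server-side default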

lowbloodsugar 5 hours ago | parent | prev [-]

Oh god no.

I mean I suppose you can make breaking changes to any API in any language, but that’s entirely on you.

lelanthran 2 hours ago | parent | prev | next [-]

Right, but there's a cost to having to support 12 different versions of a library in your system.

It's a tradeoff.

philwelch 3 hours ago | parent | prev [-]

> If you do find yourself in a situation where you can't upgrade a core library like e.g. SQLAlchemy or Spring, or the underlying Python/Java/Go/etc runtime, without requiring updates to every service, you are back in the realm of a distributed monolith.

Show me a language runtime or core library that will never have a CVE. Otherwise, by your definition, microservices don’t exist and all service oriented architectures are distributed monoliths.

3rodents 9 hours ago | parent | prev [-]

Yes, you’re describing a distributed monolith. Microservices are independent, with nothing shared. They define a public interface and that’s it, that’s the entire exposed surface area. You will need to do major version bumps sometimes, when there are backwards incompatible changes to make, but these are rare.

The logical problem you’re running into is exactly why microservices are such a bad idea for most businesses. How many businesses can have entirely independent system components?

Almost all “microservice” systems in production are distributed monoliths. Real microservices are incredibly rare.

A mental model for true microservices is something akin to depending on the APIs of Netflix, Hulu, HBO Max and YouTube. They’ll have their own data models, their own versioning cycles and all that you consume is the public interface.

makeitdouble 3 hours ago | parent | next [-]

I'm trying to understand what you see as a really independent service with nothing shared.

For instance, say company A uses the GCP logging stack, and company B does the same. GCP updates its product in a way that strongly encourages upgrading within a specific time frame (e.g. the price will drastically increase otherwise), so A and B do it mostly at the same time for the same reason.

Are A and B truly independent under your vision, or are they a company-spanning monolith?

buttercraft 3 hours ago | parent [-]

> mostly at the same time

Mostly? If you can update A one week and B the next week with no breakage in between, that seems pretty independent.

makeitdouble 2 hours ago | parent [-]

This was also the case for the micro-service situation described in the article. From TFA:

> Over time, the versions of these shared libraries began to diverge across the different destination codebases.

wowohwow 9 hours ago | parent | prev [-]

This type of elitist mentality is such a problem and such a drain on software development. "Real microservices are incredibly rare." I'll repeat myself from my other post: by this level of logic nothing is a microservice.

Do you depend on a cloud provider? Not a microservice. Do you depend on an ISP for Internet? Not a microservice. Depend on humans to do something? Not a microservice.

Textbook definitions and reality rarely coincide. Rather than taking such a fundamentalist approach that leads nowhere, recognize that for all intents and purposes, what I described is a microservice, not a distributed monolith.

ollysb 9 hours ago | parent | next [-]

It's fine to have dependencies, the point is two services that need to be deployed at the same time are not independent microservices.

wowohwow 9 hours ago | parent [-]

Yes, the user I'm replying to is suggesting that taking on a dependency of a shared software repository makes the service no longer a microservice.

That is fundamentally incorrect. As presented in my other post, you can correctly use the shared repository as a dependency and refer to a stable version rather than a dynamic version, which is where the problem arises.

jameshart 8 hours ago | parent | next [-]

The problem with having a shared library which multiple microservices depend on isn’t on the microservice side.

As long as the microservice owners are free to choose what dependencies to take and when to bump dependency versions, it’s fine - and microservice owners who take dependencies like that know that they are obliged to take security patch releases and need to plan for that. External library dependencies work like that and are absolutely fine for microservices to take.

The problem comes when you have a team in the company that owns a shared library, and where that team needs, in order to get their code into production, to prevail upon the various microservices that consume their code to bump versions and redeploy.

That is the path to a distributed monolith situation and one you want to avoid.

wowohwow 8 hours ago | parent [-]

Yes we are in agreement. A dependency on an external software repository does not make a microservice no longer a microservice. It's the deployment configuration around said dependency that matters.

8 hours ago | parent | prev [-]
[deleted]
3rodents 8 hours ago | parent | prev | next [-]

"by this level of logic nothing is a micro service"

Yes, exactly. The point is not elitism. Microservices are a valuable tool for a very specific problem but what most people refer to as "microservices" are not. Language is important when designing systems. Microservices are not just a bunch of separately deployable things.

The "micro" in "microservice" doesn't refer to how it is deployed, it refers to how the service is "micro" in responsibility. The service has a public interface defined in a contract that other components depend on, and that is it, what happens within the service is irrelevant to the rest of the system and vice verse, the service does not have depend on knowledge of the rest of the system. By virtue of being micro in responsibility, it can be deployed anywhere and anyhow.

If it is not a microservice, it is just a service, and when it is just a service, it is probably a part of a distributed monolith. And that is okay, a distributed monolith can be very valuable. The reason many people bristle at the mention of microservices is that they are often seen as an alternative to a monolith but they are not, it is a radically different architecture.

We must be precise in our language because if you or I build a system made up of "microservices" that aren't microservices, we're taking on all of the costs of microservices without any of the benefits. You can choose to drive to work, or take the bus, but you cannot choose to drive because it is the cheapest mode of transport or walk because it is the fastest. The costs and benefits are not independent.

The worst systems I have ever worked on were "microservices" with shared libraries. All of the costs of microservices (every call now involves a network) and none of the benefits (every service is dependent on the others). The architect of that system had read all about how great microservices are and understood it to mean separately deployable components.

There is no hierarchy of goodness, we are just in pursuit of the right tool for the job. A monolith, a distributed monolith, or a microservice architecture could be the right tool for one problem and the wrong tool for another.

https://www.youtube.com/watch?v=y8OnoxKotPQ

wowohwow 7 hours ago | parent | next [-]

> We must be precise in our language

I am talking about using a shared software repository as a dependency, which is valid for a microservice. Taking said dependency does not turn a microservice into a monolith.

It may be a build time dependency that you do in isolation in a completely unrelated microservice for the pure purpose of building and compiling your business microservice. It is still a dependency. You cannot avoid dependencies in software or life. As Carl Sagan said, to bake an apple pie from scratch, you must first invent the universe.

> The worst systems I have ever worked on were "microservices" with shared libraries.

Ok? How is this relevant to my point? I am only referring to the manner in which your microservice references said libraries, not the pros or cons of implementing or using shared libraries (e.g. mycompany-specific-utils), common libraries (e.g. apache-commons), or any software component for that matter.

> Yes, exactly

So you're agreeing that there is no such thing as a microservice. If that's the case, then the term is pointless other than as a description of an aspirational yet unattainable state. Which is my point exactly. For the purposes of the exercise described, the software is a microservice.

magicalhippo 6 hours ago | parent [-]

> Taking said dependency does not turn a microservice into a monolith.

True. However one of the core tenets of microservices is that they should be independently deployable[1][2].

If taking on such a shared dependency does not interfere with them being independently deployable then all is good and you still have a set of microservices.

However, if that shared dependency couples the services so that when one needs a new version of the shared dependency they all do, well, suddenly those services are no longer microservices but a distributed monolith.

[1]: https://martinfowler.com/microservices/

[2]: https://www.oreilly.com/content/a-quick-and-simple-definitio...

abernard1 6 hours ago | parent | prev [-]

> The "micro" in "microservice" doesn't refer to how it is deployed, it refers to how the service is "micro" in responsibility.

The "micro" in microservice was a marketing term to distinguish it from the bad taste of particular SOA technology implementations in the 2000s. A similar type of activity as crypto being a "year 3000 technology."

The irony is that it was commonly the case that "services" weren't part of a distributed monolith. Services that were too big were still separately deployable. When services became nothing but an HTTP interface over a database entity, that's when things became complicated via orchestration; orchestration previously done by a service... not done to a service.

AndrewKemendo 9 hours ago | parent | prev [-]

And if my grandmother had wheels she would be a bike

There are categories, and ontologies are real in the world. If you create one thing and call it something else, that doesn’t mean the definition of “something else” should change.

By your definition it is impossible to create a state based on coherent specifications, because most states don’t align to the specification.

We know for a fact that’s wrong via functional programming, state machines, and formal verification.