| ▲ | Microservices for the Benefits, Not the Hustle (2023) (wolfoliver.medium.com) |
| 24 points by WolfOliver 4 days ago | 37 comments |
| |
|
| ▲ | kukkeliskuu 2 hours ago | parent | next [-] |
The biggest beef I have with microservice architectures is the lack of transactions across service boundaries. We say that such systems-of-systems are "eventually consistent", but they are actually never guaranteed to be in a consistent state -- i.e. they are always inconsistent. That pushes the responsibility for consistency onto the system that needs to use the data, which makes implementing those systems either extremely complex or -- more typically -- leads them to ignore the problem and introduce distributed timing bugs that are difficult to find in testing. The benefits of microservices are offset by losing the ability to build on database functionality to make your systems robust. |
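A minimal sketch of the gap being described, with an invented inventory/order example (the service names, endpoints, and tables are hypothetical): in a single database the two writes share one ACID transaction, while across two services nothing ties the calls together.

```python
# Minimal sketch; "inventory-svc"/"order-svc" and the tables are invented for illustration.
import sqlite3
import requests  # assumed HTTP client for the two hypothetical services

def place_order_monolith(conn: sqlite3.Connection, item_id: int, qty: int) -> None:
    # Single database: one ACID transaction -- both writes commit or neither does.
    with conn:
        conn.execute("UPDATE inventory SET stock = stock - ? WHERE id = ?", (qty, item_id))
        conn.execute("INSERT INTO orders (item_id, qty) VALUES (?, ?)", (item_id, qty))

def place_order_microservices(item_id: int, qty: int) -> None:
    # Two services, two databases: no transaction spans the two calls.
    requests.post("http://inventory-svc/reserve",
                  json={"id": item_id, "qty": qty}).raise_for_status()
    # A crash or timeout here leaves a reservation with no matching order, and
    # nothing guarantees when -- or whether -- some compensating job reconciles it.
    requests.post("http://order-svc/orders",
                  json={"item_id": item_id, "qty": qty}).raise_for_status()
```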
| |
| ▲ | SOLAR_FIELDS an hour ago | parent [-] | | The biggest beef I currently have with microservice architectures is that they are more annoying to work with when working with LLMs. Ultimately that is probably the biggest limiting factor for microservices in 2026. The tooling for multi-repo setups is there (I've been using RepoPrompt for this with really good effect), but fundamentally LLMs in their default state, without a purpose-designed tool like this, suck at microservices compared to a monorepo. You could also turn around and say that it's a good context boundary for the LLM, which is true, but then you're back at the same problem microservices have always had: they push the integration work onto another team so that developers can make it Not Their Problem. Which is, honestly, just a restatement of the exact thing you just said, framed in a different way. I think your statement can also be used against event-driven architecture -- having this massive event bus that controls all the levers of your distributed system always sounds great in theory, but in practice you end up with almost the exact same problem as what you just described, because the tooling for offering those integration guarantees is just not nearly as robust as a centralized database. | | |
| ▲ | weitendorf 36 minutes ago | parent [-] | | I have found mostly the opposite but partly the same. With the right tooling, LLMs are IMO much better in microservice architectures. If you're regularly needing to do multi-repo PRs or share code between repos as they work, to me that is a sign that you weren't really "doing microservices" before adding LLMs to your project, because there should be some kind of API surface that you can share with LLMs in other repos, and cross-service changes should generally not be done by the same agent. Even if the same dev is driving the work, it's like having a junior engineer do a cross-service staggered release and letting them skip the well-defined existing API surfaces. The entire point of microservices is that you are making that hard/introducing friction to that stuff on purpose so things can be released and developed separately. IMO it has an easy solution too: just direct one agent per repo/service the way you would if you really did need to make that kind of change anyway and wanted to do it through junior developers. > they push the integration work onto another team so that developers can make it Not Their Problem I mean yes and no, this is oftentimes completely intended from the perspective of the people making the decision to do microservices. It's a way to constrain the way people develop and coordinate with each other precisely because you don't want all 50 of your developers running amok on the entire codebase (especially when they don't know how or why that code was structured some way originally, and they aren't very skilled or conscientious in integrating things maintainably or testing existing behavior). > so that developers can make it Not Their Problem IMO this is partially orthogonal to the problem. Microservices don't necessarily mean you can't modify another team's code. IMO it is a generally pretty counterproductive mindset for engineering teams when the codebase is jealously guarded like that. It just means you might need to send another team a PR or coordinate with them first rather than making the change unilaterally. Or maybe you just want to release the things separately; lately I find myself wanting that more and more because past a certain size agents just turn repos into balls of mud or start re-implementing things. |
|
|
|
| ▲ | perrygeo 10 minutes ago | parent | prev | next [-] |
I guess I will never understand the microservices vs monolith debate. What about just "services"? There are 1001 reasons you might want to peel off functionality into a separate service. Just do that, without making it into some philosophical debate. |
|
| ▲ | kayo_20211030 4 hours ago | parent | prev | next [-] |
| Unless you have Netflix scale, or Netflix scale problems, why bother with micro-services? Most mid-scale problems don't demand a micro-services solution, with data ownership, delineation of service responsibilities, etc. Monoliths with single-truth databases work just fine. Micro-services are an organizational convenience, and not a technology response to complexity. It's easier to manage groups of people than it is to manage complex technology. That's fine if you need it. Normally, it's not. If it works for you, sure, go ahead. If it doesn't, don't chase a topical orthodoxy. |
| |
| ▲ | weitendorf an hour ago | parent | next [-] | | 1. Principle of least privilege is very important if you are interacting with a large-ish number of third-party APIs. It's not just about data ownership (i.e. one service per db) or even limiting blast radius (most likely you won't get hacked either way), it's eliminating a single point of failure that makes you a more attractive/risky target in case of a hack, whether directly on your infrastructure, through a member of your team, or through improperly wielded automation. 2. Having at least some level of ability to run heterogeneous workloads in your production environment (i.e. being able to flip a switch and do microservices if you decide to) is very useful if you need to do more complicated migrations or integrate OSS/vendor software/whip up a demo on short notice. Because oftentimes you may not want to "do microservices" ideologically or as a focal point for development, but you can easily end up in a situation where you want "a microservice", and there can be an unnecessarily large number of obstacles to doing that if you've built all your tooling and ops around the assumption of "never microservices". 3. If you're working with open source software products and infra a lot, it's just way easier to e.g. launch a stalwart container to do email hosting than to figure out how to implement email hosting in your existing db and monolith. Also see above: if you find a good OSS project to help you do something much faster or more effectively, it's good for it to be easy to put it in prod. 4. Claude Code and less experienced or skilled developers don't understand separation of concerns. Now that agentic development is picking up, even orgs that didn't need the organizational convenience before may find themselves needing it now. Personally, this has been a major consideration in how I structure new projects ever since I became aware of the problem. | |
| ▲ | IshKebab 3 hours ago | parent | prev | next [-] | | The only reasons I've seen to use microservices: * It makes it easier to use multiple different languages. * You can easily scale different parts of your application independently. * Organisational convenience. Usually though you don't need any of that. | | |
| ▲ | teraflop 2 hours ago | parent | next [-] | | > You can easily scale different parts of your application independently. Just to add that I think some people assume this is something they need, even when there's no basis for it. Do you actually need 1 instance that handles Foo-type requests, and 99 instances that handle Bar-type requests, or would you be fine with 100 instances that are capable of handling either Foo or Bar as necessary? The distinction only really matters if there is some significant fixed overhead associated with being available to serve Foo requests, such that running an extra 99 of those processes has a cost, regardless of how much traffic they serve. For instance, if every Foo server needs GBs of static data to be held in RAM, or needs to be part of a Paxos group, or something like that. But if your services are stateless, then you probably don't benefit at all from scaling them independently. | | |
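A back-of-the-envelope sketch of that point, using made-up numbers (10 rps of Foo, 990 rps of Bar, roughly 100 rps per stateless instance): a shared pool is sized for the combined load, while dedicated per-type pools add an extra, mostly idle Foo instance.

```python
# Made-up capacity math: shared stateless pool vs. dedicated per-type pools.
from math import ceil

def shared_pool(rps_by_type: dict[str, float], per_instance_rps: float) -> int:
    # Any instance can serve any request type, so only the total load matters.
    return ceil(sum(rps_by_type.values()) / per_instance_rps)

def dedicated_pools(rps_by_type: dict[str, float], per_instance_rps: float) -> int:
    # Each request type gets its own pool, rounded up separately.
    return sum(ceil(rps / per_instance_rps) for rps in rps_by_type.values())

load = {"foo": 10.0, "bar": 990.0}   # requests per second, invented numbers
print(shared_pool(load, 100.0))      # 10 instances for the combined 1000 rps
print(dedicated_pools(load, 100.0))  # 11 instances; the lone Foo instance sits ~90% idle
```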
| ▲ | dilyevsky an hour ago | parent [-] | | > But if your services are stateless, then you probably don't benefit at all from scaling them independently. It's quite easy to run into hardware, OS, or especially language-runtime limitations (looking at you, Ruby and Python) when pushing even moderately high traffic, even for totally stateless applications. | | |
| ▲ | teraflop 24 minutes ago | parent [-] | | I don't really understand what you mean by this or how it relates to what I said. I certainly wasn't suggesting that you can vertically scale a service to handle unlimited traffic on a single instance. The point is that if you have stateless services, where the resources being consumed per request are mainly CPU and network bandwidth, then "scaling independently" is not a useful thing to do. You can just scale everything instead of artificially restricting which instances can handle which kinds of requests. |
|
| |
| ▲ | marcosdumay 2 hours ago | parent | prev | next [-] | | You don't need to split your code over a network to have organizational convenience. Your first point is valid. There are few ways to get it, and it's not clear if services are harder or easier than the alternatives. | |
| ▲ | worksonmine an hour ago | parent | prev [-] | | Sometimes there are reasons to separate projects. I wouldn't put a scraper or pdf generator with the main application. There are benefits to keeping some things small and isolated. Teams that have problems with micro services are doing it wrong. |
| |
| ▲ | MrDarcy 3 hours ago | parent | prev [-] | | The ecosystem. Micro services are the most efficient way to integrate CNCF projects deeply with your platform at any size. Edit: Genuinely curious about the downvotes here. The concept directly maps to all the reasons the article author cited. | | |
|
|
| ▲ | vaibhav2614 3 hours ago | parent | prev | next [-] |
| Microservices seem great when you're writing them, but they become unwieldy pretty fast. At the beginning, it's nice to know that team A is responsible for service B, so that if there are any bugs/feature requests, you know who to go to. This neat story falls apart if team A gets dissolved or merged with another team. The new group owning service B doesn't feel the same level of ownership. When maintenance work comes in (security vulnerabilities, stack modernization, migrations), the team is forced to internalize it rather than share it with other teams. Operational maintenance also becomes a lot harder - deploying many services correctly is much more difficult locally, in pre-prod, and in production. Any benefits you get from writing a single HTTP server quickly are erased in the time spent on integration work with the rest of the company. Here is a blog post that makes a more expansive argument against microservices:
https://www.docker.com/blog/do-you-really-need-microservices... |
| |
| ▲ | marcosdumay 2 hours ago | parent [-] | | > it's nice to know that team A is responsible for service B Yet another argument that applies better or equally well to shared libraries. I've made arguments for creating services at work. But it seems that every time somebody tries to make a case for them on the web, it's not a reason to use services. |
|
|
| ▲ | weitendorf an hour ago | parent | prev | next [-] |
I believe that microservices (but under a different model than K8s et al. expose) are poised to make a huge comeback soon due to agentic development. My company has been investing significantly in this direction for a while, because agents need better APIs/abstractions for execution and interaction with cloud environments. Once Claude Code came out, something new clicked with me regarding how agent coordination will actually end up working in practice. Unless you want to spend a ton of time trying to prompt them into understanding separation of concerns (Claude Code specifically seems to often ignore these instructions/have conflicting default instructions), if you want to scale out agent-driven development you need to enforce separation of concerns at the repo level. It's basically the same problem as it was 5-10 years ago: if you have a bunch of logic that interacts across "team"/knowledge/responsibility/feature boundaries, then interacting with your dependencies over an API, developing in separate repos, and building + rolling out the logic separately helps enforce separation of concerns and integration around well-specified interfaces. In an ideal world, Claude Code would not just turn every repo into a ball of mud, at least if you asked it nicely and gave it clear guidelines to follow to prevent that. The same was always true with monoliths and trying to coordinate/train less experienced developers not to do the same thing, and it turns out we didn't live in an ideal world back then either, so we used microservices to prevent that more structurally! History sure does rhyme. |
|
| ▲ | kukkeliskuu 2 hours ago | parent | prev | next [-] |
It is not unheard of to encounter a situation in enterprises where the microservice architecture has been "too successful". There may be as many as 500 microservices, and their number is growing rapidly. The situation might no longer be under control; sometimes even the responsibility for maintaining them is unclear or "shared". It is easier to implement a new microservice than to track down who could implement something in an existing one. I have encountered this problem several times, so I started a side project to bring such situations under control. It is still alpha, but the first part -- scoping the problem -- is already pretty useful, allowing you to select, visualize, and tag microservices, among other things. If anybody is interested, the code is here: https://github.com/mikko-ahonen/arch-ascent/ |
|
| ▲ | codr7 4 hours ago | parent | prev | next [-] |
The argument here, as far as I care to understand it rn, seems to be that microservices could actually live up to the promises if you follow ALL the rules religiously. From personal experience, the problem is complexity, which ends up costing money. At a certain scale, splitting off separate services may or may not make sense. But always building anything and everything as a set of black boxes that only communicate over network APIs, each potentially with its own database, is one of those ideas that sounds like fun until you've had a taste of the problems involved -- especially if you have strong ACID requirements, or want to debug pieces in isolation. |
| |
| ▲ | 12_throw_away 21 minutes ago | parent [-] | | > micro services could actually live up to the promises if you follow ALL the rules religiously. To be fair, I think this is true of nearly everything (well, except maybe Agile). Like, yeah, monoliths work great IF you rigorously follow software engineering best practices about isolation, coupling, concurrency, and overall project organization ... but in real life they usually turn into a tangled mess of broken abstraction boundaries and encapsulation-breaking hacks. (that said, I'd still much rather untangle and refactor a poorly organized monolith than a mush of poorly factored microservices) |
|
|
| ▲ | tossandthrow 4 hours ago | parent | prev | next [-] |
> Purpose 1: Minimize Costs of Change The cost of change is radically increased using microservices. With microservices you scatter the business complexity across multiple services and move a considerable amount of complexity out of the easily testable code base and into the infrastructure. IMHO doing a microservice architecture for this reason is horrible. |
| |
| ▲ | simianwords 4 hours ago | parent [-] | | You are right, but in a different context. In a well-thought-out microservice architecture, you will not have business logic scattered across multiple services. We have had instances of microservice architecture where doing one change required changes in 4 different microservices, defeating the whole point. Obviously this is bad. | | |
| ▲ | vjvjvjvjghv 2 hours ago | parent [-] | | “ In a well thought out microservice architecture, you will not have business logic scattered across multiple services.” A “well thought out” architecture that holds up over years is a pipe dream. There will always be changes that require a rethinking of the whole system. |
|
|
|
| ▲ | daxfohl 4 hours ago | parent | prev | next [-] |
| IME "Risk 2: Distributed Monolith" always comes back to bite. You have a nice separation of concerns at first, but then a quarter later there's some new feature request that cuts across those boundaries, and you're forced into distributed monolith territory. Then the problem is you can't refactor it even if you wanted to, because other teams have taken dependencies on your existing APIs etc. Cleaning up becomes a risky quarter-long multi-team project instead of a straightforward sprint-long ticket. I think AI is going to reverse the microservice trend too. The main problem that microservices improves is allowing teams to work more independently. Deployments, and especially rollbacks if there's a bug, can be quick for a microservice but take lots of coordination for monoliths. With AI (once/if it gets better), I imagine project work to be a lot more serial, since they work so much faster, and it'll be able to deploy one project at a time. A lot less chance of overlapping or incompatible changes that block monolith rollouts for weeks until they're resolved. A lot less extra feature-flagging of every single little change since you can just roll back the deployment if needed. Plus, a single codebase will be a lot easier for a single AI system to understand and work with e2e. |
| |
| ▲ | theappsecguy 4 hours ago | parent | next [-] | | I have yet to see this mysterious phenomenon of AI working so much faster and better than high-performing teams. What I see so far is sloppy code, poor tests, and systems that do not scale in the long run. | | |
| ▲ | daxfohl 3 hours ago | parent [-] | | Yeah, hence the "when/if". Still a few model generations off, IMO. But if it happens, I think "serialization of feature development" is going to have an outsized effect on architecture, process, planning, deployment, etc. |
| |
| ▲ | vjvjvjvjghv 2 hours ago | parent | prev | next [-] | | “ IME "Risk 2: Distributed Monolith" always comes back to bite. You have a nice separation of concerns at first, but then a quarter later there's some new feature request that cuts across those boundaries, and you're forced into distributed monolith territory.” In what way do microservices handle this better? When you have a feature request that cuts across service boundaries you have to coordinate multiple teams to change and deploy at the same time. | |
| ▲ | kayo_20211030 4 hours ago | parent | prev [-] | | First two paragraphs are 100% correct. The third paragraph, I'm not so sure about. The jury is still out, I feel. |
|
|
| ▲ | simianwords 4 hours ago | parent | prev | next [-] |
The main benefit of microservices is independent deployments. In my team, we once had 5 deployments of our service in a day and we did not have to coordinate it with anyone outside the team. This is an amazing benefit. Not many realise the cost we pay due to coordination. I can't even imagine how this would work in a monolith. Maybe we would meticulously write code in our dev environment and kinda pray that it works properly in production when our code is released, say, once a day. Real life is more messy and it is great that I had the option to deploy 5 times to production. That fast feedback loop is much appreciated. |
| |
| ▲ | bdangubic 4 hours ago | parent [-] | | I work on a monolith and deploy 20+ times per day to production, sometimes 100+ times depending on the day. Weird that you'd say a benefit of microservices is that you can do 5 deployments daily to production -- that would not make my top-100 list of reasons to use them. | | |
|
|
| ▲ | jacquesm 2 hours ago | parent | prev [-] |
One of the main advantages of microservices does not work with smaller teams: parallel, decoupled development. This is much easier with a service that is broken down into multiple well-defined components than it is with larger chunks, because the interfaces are much easier to test against. |