| ▲ | 9dev 2 days ago |
| It's sure a corny stance to hold if you're navigating an infrastructure nightmare daily, but in my opinion, much of the complexity addresses not technical, but organisational issues: You want straightforward, self-contained deployments for one, instead of uploading files onto your single server. If the process crashes or your harddisk dies, you want redundancy so even those twelve customers can still access the application. You want a CI pipeline, so the junior developer can't just break prod because they forgot to run the tests before pushing. You want proper secret management, so the database credentials aren't just accessible to everyone. You want a caching layer, so you're not surprised by a rogue SQL query that takes way too long, or a surge of users that exhaust the database connections because you never bothered to add proper pooling. Adding guardrails to protect your team from itself mandates some complexity, but just hand-waving that away as unnecessary is a bad answer. At least if you're working as part of a team. |
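For the pooling and caching points specifically, a minimal sketch of what that can look like in application code (psycopg 3 here; the DSN, pool sizes, and TTL are illustrative, not a recommendation):

```python
# Sketch: a bounded connection pool plus a tiny in-process TTL cache, so a
# surge of requests queues for a connection instead of exhausting Postgres,
# and a hot query doesn't hit the database every single time.
import time
from psycopg_pool import ConnectionPool  # psycopg 3

pool = ConnectionPool(
    "dbname=app user=app",  # illustrative DSN
    min_size=2,
    max_size=10,            # hard ceiling instead of one connection per request
)

_cache: dict[str, tuple[float, object]] = {}

def cached_query(key: str, sql: str, ttl: float = 30.0):
    """Return a cached result if it's younger than `ttl` seconds, else query."""
    hit = _cache.get(key)
    if hit and time.monotonic() - hit[0] < ttl:
        return hit[1]
    with pool.connection() as conn:  # blocks briefly if the pool is exhausted
        rows = conn.execute(sql).fetchall()
    _cache[key] = (time.monotonic(), rows)
    return rows
```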
|
| ▲ | macspoofing a day ago | parent | next [-] |
| >It's sure a corny stance to hold if you're navigating an infrastructure nightmare daily, but in my opinion, much of the complexity addresses not technical, but organisational issues: You want straightforward, self-contained deployments for one, instead of uploading files onto your single server ... You can get all that with a monolith server and a Postgres backend. |
| |
| ▲ | benterix a day ago | parent | next [-] | | With time, I discovered something interesting: for us techies, using container orchestration is about reliability, zero-downtime deployments, limiting blast radius, etc. But for management, it's completely different. It's all about managing complexity on an organizational level. It's so much easier to think in terms of "Team 1 is in charge of microservice A". And I know from experience that it works decently enough, at least in some orgs with competent management. | | |
| ▲ | kace91 a day ago | parent | next [-] | | It’s not a management thing. I’m an engineer and I think it’s THE main advantage micro services actually provide: they split your code hard and allow a team to actually get ownership of the domain. No crossing domain boundaries, no in-between shared code, etc. I know: it’s ridiculous to have an architectural barrier for an organizational reason, and the cost of a bad slice multiplies. I still think that, in some situations, it’s better than the gas-station-bathroom effect of shared codebases. | | |
| ▲ | strken a day ago | parent | next [-] | | I don't see why it's ridiculous to have an architectural barrier for org reasons. Requiring every component to be behind a network call seems like overkill in nearly all cases, but encapsulating complexity into a library where domain experts can maintain it is how most software gets built. You've got to lock those demons away where they can't affect the rest of the users. | | |
| ▲ | vbezhenar a day ago | parent | next [-] | | The problem is that a library usually does not provide good enough boundaries. A C library can just shit all over your process memory. A Java library can wreak all kinds of hell on your objects with reflection, or just call System.exit(LOL). The minimal boundary that keeps the demons at bay is the process boundary, and then you need some way for processes to talk to each other. If you're separating components into processes, it's very natural to put them on different machines, so you need your IPC to be network calls. One more step and you're implementing REST, because infra people love HTTP. | | |
| ▲ | sevensor a day ago | parent | next [-] | | > it's very natural to put them to different machines, so you need your IPC to be network calls But why is this natural? I’m not saying we shouldn’t have network RPC, but it’s not obvious to me that we should have only network RPC when there are cheap local IPC mechanisms. | | |
| ▲ | vbezhenar a day ago | parent [-] | | Because horizontal scaling is the best scaling method. Moving services to different machines is the easiest way to scale. Of course you can keep them on the same machine until you actually need to scale (maybe forever), but it makes sense to make some architectural decisions early that would not prevent scaling in the future, if the need arises. Premature optimisation is the root of all evil. But premature pessimisation is not a good thing either. You should keep options open, unless you have a good reason not to do so. If your IPC involves moving gigabytes of transient data between components, maybe it's a good idea to use shared memory. But usually that's not required. | | |
| ▲ | strken 21 hours ago | parent [-] | | I'm not sure I see that horizontally scaling necessarily requires a network call between two hosts. If you have an API gateway service, a user auth service, a projects service, and a search service, then some of them will be lightweight enough that they can reasonably run on the same host together. If you deploy the user auth and projects services together then you can horizontally scale the number of hosts they're deployed on without introducing a network call between them. This is somewhat common in containerisation where e.g. Kubernetes lets you set up sidecars for logging and so on, but I suspect it could go a lot further. Many microservices aren't doing big fan-out calls and don't require much in the way of hardware. |
|
| |
| ▲ | pjmlp a day ago | parent | prev [-] | | And then we're back to the 1980s UNIX process model before the wide adoption of dynamic loading, but because we need to be cool we call them microservices. |
| |
| ▲ | kace91 a day ago | parent | prev [-] | | >Requiring every component to be behind a network call seems like overkill in nearly all cases That’s what I was referring to, sorry for the inaccurate adjective. Most people try to split a monolith into domains, move code into libraries, or stuff like that - but IMO you rarely avoid a shared space that imports the subdomains, with blurry/leaky boundaries and with ownership falling between the cracks. Microservices are better at preventing that shared space, as there is less expectation of an orchestrating common space. But as you say, the cost is ridiculous. I think there’s an unfilled space for an architectural design that somehow enforces boundaries and avoids common spaces as strongly as microservices do, without the physical separation. | | |
| ▲ | sevensor a day ago | parent [-] | | How about old fashioned interprocess communication? You can have separate codebases, written in different languages, with different responsibilities, running on the same computer. Way fewer moving parts than RPC over a network. |
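To make the "cheap local IPC" idea concrete, here is a minimal Unix-domain-socket sketch; the socket path and the newline "protocol" are made up purely for illustration:

```python
# Sketch: two processes on the same machine talking over a Unix domain
# socket -- separate codebases, separate languages if you like, no network.
import contextlib
import os
import socket

SOCK_PATH = "/tmp/projects-service.sock"  # hypothetical path

def serve() -> None:
    with contextlib.suppress(FileNotFoundError):
        os.unlink(SOCK_PATH)                     # clean up a stale socket file
    srv = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
    srv.bind(SOCK_PATH)
    srv.listen()
    while True:
        conn, _ = srv.accept()
        with conn:
            request = conn.recv(4096).decode().strip()
            conn.sendall(f"echo: {request}\n".encode())

def call(message: str) -> str:
    cli = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
    cli.connect(SOCK_PATH)
    with cli:
        cli.sendall(f"{message}\n".encode())
        return cli.recv(4096).decode().strip()
```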
|
| |
| ▲ | pjc50 a day ago | parent | prev | next [-] | | That was the original Amazon motivation, and it makes sense. Conway's law. A hundred developers on a single codebase needs significant discipline. But that doesn't warrant its use in smaller organizations, or for smaller deployments. | |
| ▲ | saulpw a day ago | parent | prev | next [-] | | Conway's Law: Organizations which design systems (in the broad sense used here) are constrained to produce designs which are copies of the communication structures of these organizations. | |
| ▲ | pjmlp a day ago | parent | prev | next [-] | | Libraries do exist, unfortunately too many developers apparently never learn about code modularity. | |
| ▲ | immibis a day ago | parent | prev [-] | | And then you have some other group of people that sees all the redundancy and decides to implement a single unified platform on which all the microservices shall be deployed. |
| |
| ▲ | embedding-shape a day ago | parent | prev | next [-] | | > using container orchestration is about reliability, zero-downtime deployments I think that's the first time I've heard any "techie" say we use containers because of reliability or zero-downtime deployments; those feel like they have nothing to do with each other. We were building reliable server-side software with zero-downtime deployments long before containers became the "go-to", and if anything it was easier before containers. | | |
| ▲ | benterix a day ago | parent [-] | | It would be interesting to hear your story; mine is that containers in general start an order of magnitude faster than VMs (in general! we can easily find edge cases), and hence e.g. horizontal scaling is faster. You say it was easier before containers; I say k8s, in spite of its complexity, is a huge blessing, as teams can upgrade their own parts independently and do things like canary releases easily, with automated rollbacks etc. It's so much faster than VMs or bare metal (which I still use a lot and don't plan to abandon anytime soon, but I understand their limitations). | | |
| ▲ | embedding-shape 11 hours ago | parent [-] | | In general, my experience is "the more moving parts == less reliable", if I were to generalize across two decades of running web services. The most reliable platforms I've helped manage have been the ones that tried to avoid adding extra complexity until they really couldn't, and when I left they still deployed applications by copying a built binary to a Linux host, reloading the systemd service, switching the port in the proxy, letting traffic hit the new service while health-checking, and, once green, switching over and stopping the old service. Deploys usually took minutes (unless something was broken), scaling worked the same as with anything else (increase a number and redeploy), and there was no Kubernetes, Docker, or even containers as far as the eye could see. |
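A rough sketch of that deploy flow, assuming two instances behind a proxy on the same host; the systemd template unit, ports, health endpoint, and proxy-switch helper are all placeholders, not a drop-in script:

```python
# Sketch: start the new build, health-check it, flip the proxy, retire the
# old instance. app@<port>, /healthz and switch-proxy-upstream are hypothetical.
import subprocess
import time
import urllib.request

NEW_PORT, OLD_PORT = 8081, 8080

def healthy(port: int, tries: int = 30) -> bool:
    """Poll the new instance's health endpoint before sending it traffic."""
    for _ in range(tries):
        try:
            with urllib.request.urlopen(f"http://127.0.0.1:{port}/healthz", timeout=2) as r:
                if r.status == 200:
                    return True
        except OSError:
            pass
        time.sleep(1)
    return False

subprocess.run(["systemctl", "restart", f"app@{NEW_PORT}"], check=True)     # start new build
if healthy(NEW_PORT):
    subprocess.run(["./switch-proxy-upstream", str(NEW_PORT)], check=True)  # point the proxy at it
    subprocess.run(["systemctl", "stop", f"app@{OLD_PORT}"], check=True)    # retire the old one
else:
    raise SystemExit("new instance never became healthy; the old one keeps serving")
```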
|
| |
| ▲ | Towaway69 a day ago | parent | prev | next [-] | | As soon as there is more than one container to organise, it becomes a management task for said techies. Then suddenly one realises that techies can also be bad at management. Managing a container environment requires not only deployment skills but also documentation and communication skills. Suddenly it’s not management but rather the techie who can't manage their tech stack. This pointing of fingers at management is rather repetitive and simplistic, but also very common. | |
| ▲ | a day ago | parent | prev [-] | | [deleted] |
| |
| ▲ | 9dev a day ago | parent | prev | next [-] | | You don't. When your server crashes, your availability is zero. It might crash because of a myriad of reasons; at some times, you might need to update the kernel to patch a security issue for example, and are forced to take your app down yourself. If your business can afford irregular downtime, by all means, go for it. Otherwise, you'll need to take precautions, and that will invariably make the system more complex than that. | | |
| ▲ | macspoofing a day ago | parent | next [-] | | >You don't. When your server crashes, your availability is zero. As your business needs grow, you can start layering complexity on top. The point is you don't start at 11 with an overly complex architecture. In your example, if your server crashes, just make sure you have some sort of automatic restart. In practice that may mean a downtime of seconds for your 12 users. Is that more complexity? Sure - but not much. If you need to take your service down for maintenance, you notify your 12 users and schedule it for 2am ... etc. Later you could create a secondary cluster and stick a load balancer in front. You could also add a secondary replicated PostgreSQL instance. So the monolith/postgres architecture can actually take you far as your business grows. | |
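A sketch of the bare minimum "some sort of automatic restart"; in practice `Restart=on-failure` in a systemd unit does this for you, and the binary path here is a placeholder:

```python
# Sketch: a crash-only supervisor loop -- if the app exits, start it again
# after a short backoff so a crash loop doesn't spin the CPU.
import subprocess
import time

while True:
    exit_code = subprocess.run(["/srv/app/server"]).returncode  # blocks while the app runs
    print(f"app exited with code {exit_code}; restarting in 2s")
    time.sleep(2)
```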
| ▲ | BillinghamJ a day ago | parent [-] | | Changing/layering architecture adds risk. If you've got a standard way of working that you can easily throw in on day one, and whose fundamentals then don't need to change for years, that's way lower risk, easier, and faster. It is common for founding engineers to start with a preexisting way of working that they import from their previous, more-scaled company, and that approach is refined and compounded over time. It does mean starting with more than is strictly necessary, but that doesn't mean it has to be particularly complex. It means you start with heaps of already-solved problems that you simply never have to deal with, allowing focus on the product goals and the deep technical investments that need to be specific to the new company. |
| |
| ▲ | wouldbecouldbe a day ago | parent | prev | next [-] | | Yeah, theoretically that sounds good. But I've had more downtime from cloud outages and Kubernetes updates than I ever had using a simple Linux server with nginx on hardware; most outages I had on Linux were on my VPS, due to Digital Ocean's own hardware failures. AWS was down not so long ago. And if certain servers do get very important, you just run a backup server on a VPS and switch over DNS (even if you keep a high TTL, most resolvers update within minutes nowadays), or if you want to be fancy, throw a load balancer in front of it. If you solve issues in a few minutes people are always thankful, and most don't notice. With complicated setups it tends to take much longer to figure out what the issue is in the first place. | |
| ▲ | danmaz74 a day ago | parent | prev | next [-] | | You can have redundancy with a monolithic architecture. Just have two web servers behind a proxy, and use postgres with a hot standby (or use a managed postgres instance which already has that). | |
| ▲ | pjmlp a day ago | parent | prev | next [-] | | Well, load balancers are an option. | | |
| ▲ | 9dev a day ago | parent [-] | | They are. But now you've expanded the definition of "a single monolith with postgres" to multiple replicas that need to be updated in sync; you've suddenly got shared state across multiple, fully isolated processes (in the best case) or across multiple nodes (in the worst case), and a myriad of other subtle gotchas you need to account for, which raises the overall complexity considerably. | | |
| |
| ▲ | sfn42 a day ago | parent | prev [-] | | I don't see how you solve this with microservices. You'll have to take down your services in these situations too; a monolith and a microservices soup have the exact same problem. Also, in 5 years of working on both microservicy systems and monoliths, not once have these things you describe been a problem for me. Everything I've hosted in Azure has been perfectly available pretty much all the time, unless a developer messed up or Azure itself had downtime that would have taken down either kind of app anyway. But sure, let's make our app 100 times more complicated because maybe some time in the next 10 years the complexity might save us an hour of downtime. I'd say it's more likely the added complexity will cause more downtime than it saves. | | |
| ▲ | 9dev a day ago | parent [-] | | > I don't see how you solve this with microservices. I don't think I implied that microservices are the solution, really. You can have a replicated monolith, but that absolutely adds complexity of its own. > But sure let's make our app 100 times more complicated because maybe some time in the next 10 years the complexity might save us an hour of downtime. Adding replicas and load balancing doesn't have to be a hundred times more complex. > I'd say it's more likely the added complexity will cause more downtime than it saves. As I said before, this is an assessment you will need to make for your use case, and balance uptime requirements against your complexity budget; either answer is valid, as long as you feel confident with it. Only a Sith believes in absolutes. |
|
| |
| ▲ | tnel77 a day ago | parent | prev | next [-] | | In this job market, how am I supposed to get hired without the latest buzzwords on my resume? I can’t just have monolithic server and Postgres! (Sarcasm) | | |
| ▲ | spoiler a day ago | parent | next [-] | | You're sarcastic, but heavens above, have I had some cringe interviews in my last round of interviews, and most of the absurdity came from smaller start-ups too | |
| ▲ | chistev a day ago | parent | prev [-] | | Indicating sarcasm ruins the sarcasm | | |
| ▲ | tnel77 a day ago | parent | next [-] | | Sadly, it is lost on a lot of people. Without the disclaimer, I would then have a bunch of serious replies “educating” me about my life choices. | |
| ▲ | reactordev a day ago | parent | prev | next [-] | | Squint or pretend it’s not there. This crowd is hit or miss on picking it up o’ naturál. | |
| ▲ | sfn42 a day ago | parent | prev [-] | | If you don't make it clear, people will think you're serious. Sarcasm doesn't work online. If I write something like "Donald Trump is the best president ever" you don't have any way of knowing whether I'm being sarcastic or I'm just really, really stupid. Only people who know me can make that judgement, and basically nobody on here knows me. So I either have to avoid sarcasm or make it clear that I'm being sarcastic. | | |
|
| |
| ▲ | YetAnotherNick a day ago | parent | prev [-] | | Most times it isn't the complexity that bites, it's the brittleness. It's much easier to work with a bad but well-documented solution (e.g. GitHub Actions), where all the issues have already been hit by other users and the workarounds are documented by the community, than to roll your own (e.g. a simple script-based CI/CD). |
|
|
| ▲ | isodev 2 days ago | parent | prev | next [-] |
| I'm not sure why your architecture needs to be complex to support CI pipelines and a proper workflow for change management. And some of these guidelines have grown into status quo common recipes. Take your starting database, for example: the guideline is always "sqlite only for testing, but for production you want Postgres" - it's misleading and absolutely unnecessary. These defaults have also become embedded into PaaS services, e.g. the likes of Fly or Scaleway - having a disk attached to a VM instance where you can write data is never a default, and it's usually complicated or expensive to set up. All while there is nothing wrong with a disk that gets backed up - it can support most modern mid-sized apps out there before you need block storage and whatnot. |
| |
| ▲ | 9dev a day ago | parent | next [-] | | I've been involved in bootstrapping the infrastructure for several companies. You always start small, and add more components over time. I dare say, on the projects I was involved in, we were fairly successful in balancing complexity, but some things really just make sense. Using a container orchestration tool spares you from tending to actual Linux servers, for example, that need updates and firewalls and IP addresses and proper SSH key management. The complexity is still there, but it shifts somewhere else. Looking at the big picture, that might mean your knowledge requirements ease on the systems administration stuff, and tighten on the cloud provider/IaC end; that might be a good trade-off if you're working with a team of younger software engineers who don't have a strong Linux background, for example, which I assume is pretty common these days. Or, consider redundancy: Your customers likely expect your service to not have an outage. That's a simple requirement, but very hard to get right, especially if you're using a single server to provide your application. Just introducing multiple copies of the app running in parallel comes with changes required in the app (you can't assume replica #1 will handle the first and second request—except if you jump through sticky session hoops, which is a rabbit hole on its own), in your networking (HTTP requests to the domain must be sent to multiple destinations), and in your deployment process (artefacts must go to multiple places, restarts need to be choreographed). Many teams (in my experience) that have a disdain for complex solutions will choose their own, bespoke way of solving these issues one by one, only to end up in a corner of their own making. I guess what I'm saying is pretty mundane actually—solve the right problem at the right time, but no later. |
| ▲ | zelphirkalt a day ago | parent | prev | next [-] | | Having recently built a Django app, I feel like I need to highlight the issues that come with using sqlite. Once you get into many-to-many relationships in your model, suddenly all kinds of things are not supported by sqlite that are when you use postgres. This also shows that you actually cannot (!) use sqlite for testing, because it behaves significantly differently from postgres. So I think now: unless you have a really, really simple model and app, you are just better off simply starting postgres or a postgres container. | | |
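Two concrete examples of that kind of divergence in the Django ORM; the models here are hypothetical, but the backend behaviour is documented:

```python
# Hypothetical models -- the point is backend divergence, not the schema.
from django.db import models

class Tag(models.Model):
    name = models.CharField(max_length=50)

class Article(models.Model):
    title = models.CharField(max_length=200)
    tags = models.ManyToManyField(Tag)

# 1) DISTINCT ON is Postgres-only: this works on Postgres but raises
#    NotSupportedError on SQLite once the queryset is evaluated.
list(Article.objects.order_by("title").distinct("title"))

# 2) On SQLite, LIKE is case-insensitive for ASCII, so __contains quietly
#    behaves like __icontains -- same query, different results per backend.
list(Article.objects.filter(title__contains="SQL"))
```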
| ▲ | isodev a day ago | parent [-] | | My comment is that this is a choice that should be made for each project depending on what you’re building - does your model require features not supported by SQLite or Postgres etc. > Unless you have a really really simple model and app And this is the wrong conclusion. I have a really really complex model that works just fine with SQlite. So it’s not about how complex the model is, it’s about what you need. In the same way in the original post there were so many storage types, no doubt because of such “common knowledge guidelines” | | |
| ▲ | zelphirkalt a day ago | parent [-] | | OK, well, you don't always know all requirements ahead of time. When I do find out about them later on, I don't want to have to switch database backend then. For example, initially I thought I would avoid those many-to-many relationships altogether ... But they turned out to be the most fitting way to do what I needed to do in Django. I guess you could say "use sqlite as long as it lends itself well to what you are doing", sure. But when do you switch? At the first inconvenience? Or do you wait a while, until N inconveniences have been put into the codebase? And let's not forget the organizational resistance to things like changing the database. People not in the know (management usually) might question your plan to switch the database, because the workaround for this small little inconvenience _right now_ seems like much less work and less risky for production ... Before you know it, you will have 10 workarounds in there, plus the sunk cost fallacy. I may be exaggerating a little bit, but it's not like I'm painting a crazy, hard-to-imagine picture here. | | |
| ▲ | isodev a day ago | parent [-] | | You're right, and it's ok to lean on experience to anticipate certain constraints for a project. My point really is that it is just not an absolute default, and it should not be included as a "general guideline" or recommendation in documentation, tutorials and blog posts. There is also a substantial difference between SMEs and bigger corporate situations where architecture changes are practically religious. Changing the database can create friction, but at that moment you can also ask yourself: What is the cost of adding/learning this giant stateful component with maintenance needs (postgres) vs. say adapting our schema to be more compatible with what we have? (e.g. the lightweight and much cheaper sqlite, but the argument works for whatever you already have). I'd much rather see folks thinking about that. Same for caching and CDNs and whatever Cloudflare is selling this week to hook people on their platform (e.g. DDoS/API gateway protections come in many variants, we're not all 1password and sometimes it's ok to just turn on the firewall from your hosting provider). | |
|
|
| |
| ▲ | hinkley a day ago | parent | prev [-] | | Years ago we had someone who wanted to make sure that two deployments were mutually exclusive. Can’t recall why now, but something with a test environment and bootstrapping so no redundancy. I just set one build agent up with a tag that both plans required. The simplest thing that could possibly work. |
|
|
| ▲ | Freak_NL 2 days ago | parent | prev | next [-] |
| > You want a CI pipeline, so the junior developer can't just break prod because they forgot to run the tests before pushing. Make them part of your build first. Tagging a release? Have a documented process (checklist) that says 'run this, do that'. Like how in a Java Maven build you would execute `mvn release:prepare` and `mvn release:perform`, which will execute all tests as well as do the git tagging and anything else that needs doing. Scale up to a CI pipeline once that works. It is step one for doing that anyway. |
| |
| ▲ | BlindEyeHalo 2 days ago | parent [-] | | Why not do a CI pipeline from the beginning instead of relying on trust that no one ever forgets to run a check, considering adding CI is trivial with GitLab or GitHub? | | |
| ▲ | 9dev a day ago | parent | next [-] | | Because it adds friction, and whoever introduces that CI pipeline will be the one getting messages from annoyed developers, saying "your pipeline isn't working again". It's definitely a source of complexity on its own, so something you want to consider first. | | |
| ▲ | spoiler a day ago | parent | next [-] | | I agree it adds a bit of complexity, but all code adds complexity. Maybe I've interacted with CIs too much and it's Stockholm syndrome, but they are there to help tame and offload complexity, not to add complexity for complexity's sake. | | |
| ▲ | 9dev a day ago | parent [-] | | > they are there to help tame and offload complexity, not just complexity for complexity'a sake Theoretically. Practically, you're hunting for the reason why your GitHub token doesn't allow you to install a private package from another repository in your org during the build, then you learn you need a classic personal access token tied to an individual user account to interact with GitHub's own package registry, you decide that that sounds brittle and after some pondering, you figure that you can just create a GitHub app that you install in your org and write a small action that uses the GitHub API to create an on-demand token with the correct scopes, and you just need to bundle that so you can use it in your pipeline, but that requires a node_modules folder in your repository, and… Oh! Could it be that you just added complexity for complexity's sake? | | |
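For what it's worth, the token-minting step described there boils down to something like the sketch below (PyJWT plus requests; the app ID, installation ID, key path, and the exact permission set are placeholders):

```python
# Sketch: exchange a GitHub App's private key for a short-lived, scoped
# installation token. IDs and the key path are hypothetical.
import time

import jwt       # PyJWT
import requests

APP_ID = "123456"            # hypothetical GitHub App ID
INSTALLATION_ID = "7890123"  # hypothetical installation ID
private_key = open("app-private-key.pem").read()

now = int(time.time())
app_jwt = jwt.encode(
    {"iat": now - 60, "exp": now + 9 * 60, "iss": APP_ID},
    private_key,
    algorithm="RS256",
)

resp = requests.post(
    f"https://api.github.com/app/installations/{INSTALLATION_ID}/access_tokens",
    headers={"Authorization": f"Bearer {app_jwt}", "Accept": "application/vnd.github+json"},
    json={"permissions": {"packages": "read"}},  # scope it down to what the build needs
    timeout=10,
)
resp.raise_for_status()
token = resp.json()["token"]  # short-lived; expires after about an hour
```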
| ▲ | spoiler a day ago | parent [-] | | Uh, I've used the package repository with GitHub, and I don't remember having to do this! So, I'm not entirely sure what's happening here. I think this might be accidental complexity because there's probably a misconfiguration somewhere... But on that point I agree: initial set-up can be extremely daunting due to the amount of different technologies that interact, and it requires a level of familiarity that most people don't want to have with these tools. Which is understandable; they're a means to an end, and devs don't really enjoy playing with them (DevOps folks do tho!). I've had to wear many hats in my career, and was the unofficial dedicated DevOps guy in a few teams, so for better or worse I had to grow familiar with them. Often (not always) there's an easier way out, but spotting it through the bushes of documentation and overgrown configuration can be annoying. |
|
| |
| ▲ | pjc50 a day ago | parent | prev [-] | | I'm aware of how much overhead CI pipelines can be, especially for multiple platforms and architectures, but at the same time developing for N>1 developers without some sort of CI feels like developing without version control: it's like coming to work without your trousers on. | | |
| ▲ | 9dev a day ago | parent [-] | | Yeah, that was my entire point really—there's some complexity that's just warranted. It's similar to a proper risk assessment analysis: The goal isn't to avoid all possible risks, but accepting some risk factors as long as you can justify them properly. As long as you're pragmatic and honest with what you need from your CI setup, it's okay that it makes your system more complex—you're getting something in return after all. |
|
| |
| ▲ | fragmede a day ago | parent | prev [-] | | Because then you're wasting time trying to quote bash inside of yaml juuuust right to get the runners to DTRT. Okay no but seriously, if you're not being held back by how slow GitHub CI/GitLab runners are, great! For others they're slow as molasses, and devs in other languages with different build systems can run an iteration of their build REPL before git has even finished pushing, never mind waiting for a runner. |
|
|
|
| ▲ | pjc50 2 days ago | parent | prev | next [-] |
| I think that's a slightly different set of things to what OP is complaining about though. They're much more reasonable, but also "outside" of the application. Having secret management or CI (pretty much mandatory!) does not dictate the architecture of the application at all. (except the caching layer. Remember the three hard problems of computer science, of which cache invalidation is one.) Still hoping for a good "steelman" demonstration of microservices for something that isn't FAANG-sized. |
| |
| ▲ | 9dev a day ago | parent | next [-] | | > Having secret management or CI (pretty much mandatory!) does not dictate the architecture of the application at all. Oh, it absolutely does. You need some way to get your secrets into the application, at build- or at runtime, for one, without compromising security. There's a lot of subtle catches here that can be avoided by picking standard tooling instead of making it yourself, but doing so definitely shapes your architecture. | | |
| ▲ | zelphirkalt a day ago | parent [-] | | It really shouldn't. Getting the secrets in place should be done by otherwise unrelated tooling. Your apps or services should rely on the secrets being in place at start time. Often it is a matter of rendering a file at deployment time, and putting the secrets there is the job of the CI and CI-invoked tools, not of the service itself. |
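A minimal sketch of that contract from the service's side (path and variable names are illustrative): the app just reads what the deploy tooling put in place, and fails loudly if it's missing.

```python
# Sketch: secrets are provided by deploy/CI tooling before startup, either
# as environment variables or as files rendered at deployment time.
import os
import sys

def load_secret(name: str) -> str:
    if value := os.environ.get(name):          # 1) injected env var
        return value
    path = f"/run/secrets/{name.lower()}"      # 2) file rendered at deploy time
    try:
        with open(path) as fh:
            return fh.read().strip()
    except FileNotFoundError:
        sys.exit(f"missing secret {name!r}: expected env var or {path}")

DATABASE_URL = load_secret("DATABASE_URL")
```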
| |
| ▲ | hinkley a day ago | parent | prev [-] | | Cache invalidation is replacing one logical thing with a new version of the same logical thing. So technically that’s also naming things. Doubly so when you put them in a kv store. | | |
| ▲ | kragen a day ago | parent [-] | | That angle seems potentially insightful, and I'm going to have to think about it, but to me, cache invalidation seems more like replacing one logical thing with nothing. It may or may not get replaced with a new version of the same logical thing later if that's required. | | |
| ▲ | cluckindan a day ago | parent [-] | | To me, cache invalidation is not strictly about either replacing or removing cache entries. Rather, cache invalidation is the process of determining which cache entries are stale and need to be replaced/removed. It gets hairy when determining that depends on users, user group memberships AND per-user permissions, access TTL, multiple types of timestamps and/or revision numbering, and especially when the cache entries are composite, as in containing data from multiple database entities, where some, e.g. those representing a hierarchy, may not even have direct entity relationships with the cached data. | |
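One common tactic for the composite case is to bake entity revisions into the cache key, so stale entries become unreachable rather than having to be hunted down; a rough sketch, with invented names and structures:

```python
# Sketch: version-stamped cache keys. Any write bumps the owning entity's
# revision, which changes the derived key, so old composite entries are
# simply never read again (and can be evicted lazily).
import time

cache: dict[str, tuple[float, object]] = {}
revisions = {"user:9": 3, "project:42": 7}   # bumped on every write to that entity

def dashboard_key(user_id: int, project_id: int) -> str:
    return (
        f"dashboard:{user_id}:{project_id}"
        f":u{revisions[f'user:{user_id}']}"
        f":p{revisions[f'project:{project_id}']}"
    )

def get_dashboard(user_id: int, project_id: int, ttl: float = 60.0):
    key = dashboard_key(user_id, project_id)
    hit = cache.get(key)
    if hit and time.monotonic() - hit[0] < ttl:
        return hit[1]
    value = render_dashboard(user_id, project_id)   # hypothetical expensive call
    cache[key] = (time.monotonic(), value)
    return value
```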
| ▲ | kragen a day ago | parent [-] | | Yes—and, in many cases, ensuring that you don't use entries which become outdated during your computation. | | |
| ▲ | cluckindan a day ago | parent [-] | | A bit of TOCTOU sprinkled in the cache integration ensures a fun day at the races! | | |
| ▲ | kragen a day ago | parent [-] | | TOCTOU bugs are a subset of cache invalidation bugs. | | |
| ▲ | cluckindan 15 hours ago | parent [-] | | Are they really? TOCTOU is a trigger for race conditions, but I guess the result of the check is a cached value. Then again, the issue in TOCTOU is that the “cached value” is not invalidated at all, or is invalidated inadequately. It doesn’t really have anything to do with the invalidation mechanism; it is downstream from it. |
|
|
|
|
|
|
|
|
| ▲ | omnicognate 2 days ago | parent | prev | next [-] |
| Conway's Law: > Organizations which design systems... are constrained to produce designs which are copies of the communication structures of these organizations. |
| |
| ▲ | whilenot-dev a day ago | parent | next [-] | | Tell this to a company of 4 engineers that created a system with 40 microservices, deployed as one VM image, to be running on 1 machine. | | |
| ▲ | noir_lord a day ago | parent | next [-] | | They wouldn't have time to hear it because they'd be trying to fix their local dev environment. I worked for a company that had done pretty much that - not fun at all (for extra fun, half the microservices were in a language only half the dev team had even a passing familiarity with). You need someone in charge with enough "taste" to not allow that to happen, or it will. | |
| ▲ | omnicognate a day ago | parent | prev [-] | | LOL, perhaps the communication structure there was "silent, internalised turmoil". | | |
| ▲ | whilenot-dev a day ago | parent [-] | | Probably =), or Conway's law was always about the lower ends of the communication nodes in a company graph. I think it's time we also include the upper end of our cognitive limits for multitasking when we design systems in relation to organizational structures. |
|
| |
| ▲ | hinkley a day ago | parent | prev [-] | | Lesser known trick: reorganize your teams so the code isn’t batshit. | | |
| ▲ | noir_lord a day ago | parent [-] | | That does imply that the people in the business with the authority to do that know how to do it, and in my experience they don't - they can't solve a problem they don't understand, and they're unwilling to delegate it to someone who can. The same pattern repeats across multiple companies - it comes down to trust and delegation, and if the people with the power are unwilling to delegate, bad things happen. |
|
|
|
| ▲ | lelanthran a day ago | parent | prev | next [-] |
| > If the process crashes or your harddisk dies, you want redundancy so even those twelve customers can still access the application. That's fine, 6 of them are test accounts :-) > It's sure a corny stance to hold if you're navigating an infrastructure nightmare daily, but in my opinion, much of the complexity addresses not technical, but organisational issues If you have an entire organisation dedicated to 6 users, those users had better be ultra profitable. > If the process crashes or your harddisk dies, you want redundancy so even those twelve customers can still access the application Can be done simply by a sole company owner; no need for tools that make sense in an organisation (K8s, etc) > You want a CI pipeline, so the junior developer can't just break prod because they forgot to run the tests before pushing. A deployment script that includes test runners is fine for a focused product. You can even do it using a green/blue strategy if you can afford the extra $5-$10/m for an extra VPS. > You want proper secret management, so the database credentials aren't just accessible to everyone. Sure, but you don't need to deploy a full-on secrets-manager product for this. > You want a caching layer, so you're not surprised by a rogue SQL query that takes way too long, or a surge of users that exhaust the database connections because you never bothered to add proper pooling. Meh. The caching layer is not to protect you against rogue SQL queries taking too long; that's not what a cache is for, after all. As for proper pooling, what's wrong with using the pool that came with your tech stack? Do you really need to spend time setting up a different product for pooling? > Adding guardrails to protect your team from itself mandates some complexity, but just hand-waving that away as unnecessary is a bad answer. I agree with that; the key is knowing when those things are needed, and TBH unless you're doing a B2C product, or have an extremely large B2B client, those things are unnecessary. Whatever happened to "profile, then optimise"? |
|
| ▲ | zelphirkalt a day ago | parent | prev [-] |
| Sure, but most of that doesn't make it into the final production thing on the server. CI? Nope. Tests? Nope. The management of the secrets (not the secrets themselves)? Nope. Caching? OK that one does. Rate limits? Maybe, but could be another layer outside the normal services' implementation. |