| ▲ | YZF 18 hours ago |
| If your application doesn't need, and likely won't need, to scale to large clusters or multiple clusters, then there's nothing wrong per se with your solution. I don't think k8s is that hard, but there are a lot of moving pieces and there's a bit to learn. Finding someone with experience to help you can make a ton of difference. Questions worth asking:
- Do you need a load balancer?
- TLS certs and rotation?
- Horizontal scalability.
- HA/DR.
- dev/stage/production, plus being able to test/stage your complete stack on demand.
- CI/CD integrations, tools like ArgoCD or Spinnaker.
- Monitoring and/or alerting with Prometheus and Grafana.
- Would you benefit from being able to deploy a lot of off-the-shelf software (let's say Elasticsearch, or some random database, or a monitoring stack) via Helm quickly and easily? (A sketch follows below.)
- "Ingress"/proxy.
- DNS integrations.
If you answer yes to many of those questions, there's really no better alternative than k8s. If you're building web applications at a large enough scale, the answer to most of these will end up being yes at some point. |
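On the Helm bullet above, a minimal sketch of what "deploy off-the-shelf software quickly" can look like in practice, using helmfile as one possible way to declare it; the chart versions and the values file path are placeholders, not recommendations:

```yaml
# helmfile.yaml -- declare the off-the-shelf charts you want; `helmfile sync` installs/upgrades them
repositories:
  - name: elastic
    url: https://helm.elastic.co
  - name: prometheus-community
    url: https://prometheus-community.github.io/helm-charts

releases:
  - name: elasticsearch
    namespace: logging
    chart: elastic/elasticsearch
    version: 8.5.1              # placeholder pin
  - name: monitoring
    namespace: monitoring
    chart: prometheus-community/kube-prometheus-stack
    version: 65.1.0             # placeholder pin
    values:
      - values/monitoring.yaml  # environment-specific overrides, kept in Git
```

The point is less the specific tool than that a couple of dozen lines of reviewed YAML stand in for a hand-built Elasticsearch or monitoring install.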
|
| ▲ | xorcist 16 hours ago | parent | next [-] |
| Every item on that list is "boring" tech. Approximately everyone has used load balancers, test environments and monitoring since the 90s just fine. What is it that you think makes Kubernetes especially suited for this compared to every other solution during the past three decades? There are good reasons to use Kubernetes, mainly if you are using public clouds and want to avoid lock-in. I may be partial, since managing it pays my bills. But it is complex, mostly unnecessarily so, and no one should be able to say with a straight face that it achieves better uptime or requires fewer personnel than any alternative. That's just sales talk, and should be a big warning sign. |
| |
| ▲ | YZF 15 hours ago | parent | next [-] | | It's the way things work together. If you want to add a new service, you just annotate that service and DNS gets updated, your ingress gets the route added, and cert-manager gets you the certs from Let's Encrypt. If you want Prometheus to monitor your pod, you just add the right annotation. When a server goes down, k8s will move your pod around, and k8s storage will take care of having the storage follow your pod. Your entire configuration is highly available and replicated in etcd. It's just very different from your legacy "standard" technology. | | |
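A minimal sketch of the annotation-driven wiring described above, assuming external-dns, cert-manager (with a ClusterIssuer named letsencrypt-prod), and a Prometheus configured for the conventional prometheus.io scrape annotations are already installed; the app name and hostname are placeholders:

```yaml
# Ingress: external-dns publishes the hostname, cert-manager issues the TLS cert
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: myapp
  annotations:
    cert-manager.io/cluster-issuer: letsencrypt-prod            # assumed ClusterIssuer
    external-dns.alpha.kubernetes.io/hostname: myapp.example.com
spec:
  rules:
    - host: myapp.example.com
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: myapp
                port:
                  number: 80
  tls:
    - hosts:
        - myapp.example.com
      secretName: myapp-tls          # cert-manager writes the certificate here
---
# Service: the classic annotation-based Prometheus scrape convention
apiVersion: v1
kind: Service
metadata:
  name: myapp
  annotations:
    prometheus.io/scrape: "true"
    prometheus.io/port: "8080"
    prometheus.io/path: /metrics
spec:
  selector:
    app: myapp
  ports:
    - port: 80
      targetPort: 8080
```

Each piece of "magic" here is just a controller (external-dns, cert-manager, Prometheus) watching the API server for these annotations, which is the working-together being described.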
| ▲ | gr3ml1n 13 hours ago | parent [-] | | None of this is difficult to do or automate, and we've done it for years. Kubernetes simply makes it more complex by adding additional abstractions in the pursuit of pretending hardware doesn't exist. There are, maybe, a dozen companies in the world with a large enough physical footprint where Kubernetes might make sense. Everyone else is either engaged in resume-driven development, or has gone down some profoundly wrong path with their application architecture to where it is somehow the lesser evil. | | |
| ▲ | sampullman 12 hours ago | parent [-] | | I used to feel the same way, but have come around. I think it's great for small companies for a few reasons. I can spin up effectively identical dev/ci/stg/prod clusters for a medium-sized project in an hour, with CD in addition to everything GP mentioned. I basically don't have to think about ops anymore until something exotic comes up, which is nice. I agree that it feels clunky, and it was annoying to learn, but once you have something working it's a huge time saver. The ability to scale without drastically changing the system is a bonus. | | |
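One common way to get "effectively identical" environments (not necessarily what this commenter uses): a shared kustomize base with thin per-environment overlays. File paths and names below are illustrative:

```yaml
# base/kustomization.yaml -- the application, environment-agnostic
resources:
  - deployment.yaml
  - service.yaml
  - ingress.yaml
---
# overlays/prod/kustomization.yaml -- prod differs only in namespace and patches
namespace: myapp-prod
resources:
  - ../../base
patches:
  - path: replica-count.yaml   # e.g. bump replicas and resource limits for prod
---
# overlays/dev/kustomization.yaml -- dev is the same base with different knobs
namespace: myapp-dev
resources:
  - ../../base
```

`kubectl apply -k overlays/dev` and `kubectl apply -k overlays/prod` then produce environments that differ only where the overlays say they should.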
| ▲ | gr3ml1n 11 hours ago | parent | next [-] | | > I can spin up effectively identical dev/ci/stg/prod clusters for a medium-sized project in an hour, with CD in addition to everything GP mentioned. I can do the same thing with `make local` invoking a few bash commands. If the complexity increases beyond that, a mistake has been made. | |
| ▲ | xorcist 7 hours ago | parent | prev [-] | | You could say the same thing about Ansible or Vagrant or Nomad or Salt or anything else. I can say with complete confidence, however, that if you are running Kubernetes and not thinking about ops, you are simply not operating it yourself. You are paying someone else to think about it for you. Which is fine, but says nothing about the technology. |
|
|
| |
| ▲ | lmm 10 hours ago | parent | prev | next [-] | | > Every item on that list is "boring" tech. Approximately everyone has used load balancers, test environments and monitoring since the 90s just fine. What is it that you think makes Kubernetes especially suited for this compared to every other solution during the past three decades? You could make the same argument against using cloud at all, or against using CI. The point of Kubernetes isn't to make those things possible, it's to make them easy and consistent. | | |
| ▲ | drw85 3 hours ago | parent | next [-] | | But none of those things are easy.
All cloud environments are fairly complex, and Kubernetes is not something that you just pick up in an afternoon. You need to learn how it works, which takes about the same time as using 'simpler' means to do things directly. Sure, it means that two people who already understand k8s can easily exchange or hand over a project, which might be harder to understand if it were done with other means. But that's about the only bonus it brings in most situations. | |
| ▲ | eadmund an hour ago | parent | prev [-] | | > The point of Kubernetes isn't to make those things possible, it's to make them easy and consistent. Kubernetes definitely makes things consistent, but I do not think that it makes them easy. There’s certainly a lot to learn from Kubernetes, but I strongly believe that a more tasteful successor is possible, and I hope that it is inevitable. |
| |
| ▲ | threeseed 12 hours ago | parent | prev | next [-] | | Kubernetes is boring tech as well. And its advantage is a single way to manage resources, scaling, logging, observability, hardware etc. All of which is stored in Git, and so audited, reviewed, versioned, tested etc. in exactly the same way. | |
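A minimal sketch of the "everything in Git" workflow, assuming Argo CD (mentioned upthread) is installed; the repository URL, path, and names are placeholders:

```yaml
# An Argo CD Application: the cluster continuously reconciles to whatever is in Git,
# so every change is a reviewed, versioned, auditable commit
apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  name: myapp
  namespace: argocd
spec:
  project: default
  source:
    repoURL: https://github.com/example/platform-config   # illustrative repo
    targetRevision: main
    path: apps/myapp/overlays/prod
  destination:
    server: https://kubernetes.default.svc
    namespace: myapp-prod
  syncPolicy:
    automated:
      prune: true      # delete resources that were removed from Git
      selfHeal: true   # revert manual drift back to the Git state
```

Rolling back is reverting a commit, and the audit trail is the Git history.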
| ▲ | andreasmetsala 4 hours ago | parent | prev [-] | | > But it is complex, mostly unnecessarily so Unnecessary complexity sounds like something that should be fixed. Can you give an example? |
|
|
| ▲ | otabdeveloper4 11 hours ago | parent | prev | next [-] |
| Kubernetes is a great example of the "second-system effect". Kubernetes only works if you have a webapp written in a slow interpreted language. For anything else it is a huge impedance mismatch with what you're actually trying to do. P.S. In the real world, Kubernetes isn't used to solve technical problems. It's used as a buffer between the dev team and the ops team, who usually have different schedules/budgets, and might even be different corporate entities. I'm sure there might be an easier way to solve that problem without dragging in Google's ridiculous and broken tech stack. |
| |
| ▲ | mrweasel 8 hours ago | parent | next [-] | | > It's used as a buffer between the dev team and the ops team, who usually have different schedules/budgets That depends on your definition. If the ops team is solely responsible for running the Kubernetes cluster, then yes. In reality that's rarely how things turn out. Developers want Kubernetes, because... I don't know. Ops doesn't even want Kubernetes in many cases. Kubernetes is amazing, for those few organisations that really need it. My rule of thumb is: if your worker nodes aren't entire physical hosts, then you might not need Kubernetes. I've seen some absolutely crazy setups where developers had designed this entire solution around Kubernetes, only to run one or two containers. The reasoning is pretty much always the same: they know absolutely nothing about operations, and fail to understand that load balancers exist outside of Kubernetes, or that their solution could be an nginx configuration, 100 lines of Python and some systemd configuration. I accept that I've lost the fight over Kubernetes being overly complex and a nightmare to debug. In my current position I can even see some advantages to Kubernetes, so I was at least a little off in my criticism. Still, I don't think Kubernetes should be your default deployment platform, unless you have very specific needs. | |
| ▲ | rixed 9 hours ago | parent | prev | next [-] | | Contrary to popular belief, k8s is not Google's tech stack. My understanding is that it was initially sold as Google's tech to benefit from Google's tech reputation (exploiting the confusion caused by the fact that some of the original k8s devs were ex-Googlers), and today it's also Google trying to pose as the inventor of k8s, to benefit from its popularity. Interesting case of host/parasite symbiosis, it seems. Just my impression though, I could be wrong; please comment if you know more about the history of k8s. | | |
| ▲ | jonasdegendt 7 hours ago | parent [-] | | Is there anyone who works at Google who can confirm this? What's left of Borg at Google? Did the company switch to the open source Kubernetes distribution at any point? I'd love to know more about this as well. > exploiting the confusion caused by the fact that some of the original k8s devs were ex-Googlers What about the fact that many active Kubernetes developers are also active Googlers? | |
| |
| ▲ | maxdo 10 hours ago | parent | prev [-] | | Kubernetes is an API for your cluster that is portable between providers, more or less.
There are other abstractions, but they are not portable, e.g. fly.io, DO, etc.
So unless you want vendor lock-in, you need it.
For one of my products, I had to migrate between different kube flavors 4 times due to business reasons, from self-managed (2 times) to GKE and EKS. | |
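To make the portability claim concrete: a basic Deployment like the sketch below applies unchanged to a self-managed cluster, GKE, or EKS (the name and image are placeholders). The "more or less" caveat is real, though; storage classes, load balancer annotations, and ingress controllers still differ per provider.

```yaml
# The same manifest works on any conformant cluster; only the kubeconfig changes
apiVersion: apps/v1
kind: Deployment
metadata:
  name: api
  labels:
    app: api
spec:
  replicas: 3
  selector:
    matchLabels:
      app: api
  template:
    metadata:
      labels:
        app: api
    spec:
      containers:
        - name: api
          image: registry.example.com/api:1.4.2   # placeholder image
          ports:
            - containerPort: 8080
          resources:
            requests:
              cpu: 100m
              memory: 128Mi
            limits:
              memory: 256Mi
```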
| ▲ | otabdeveloper4 10 hours ago | parent [-] | | > There are other abstractions, but they are not portable Not true. Unix itself is an API for your cluster too, as the original post implies. Personally, as a "tech lead" I use NixOS. (Yes, I am that guy.) The point is, k8s is a shitty API because it's built only for Google's "run a huge webapp built on shitty Python scripts" use case. Most people don't need this; what they actually want is some way for dev to pass the buck to ops in a way that PMs can track on a Gantt chart. |
|
|
|
| ▲ | signal11 15 hours ago | parent | prev | next [-] |
| > If you answer yes to many of those questions, there's really no better alternative than k8s.
This is not even close to true, even with a small amount of resources. The notion that k8s somehow is the only choice is right along the lines of “Java Enterprise Edition is the only choice” — i.e. a real failure of the imagination.
For startups and teams with limited resources, DO, fly.io and render are doing lots of interesting work. But what if you can’t use them? Is k8s your only choice? Let’s say you’re a large org with good engineering leadership, and you have high-revenue systems where downtime isn’t okay. Also, for compliance reasons, public cloud isn’t okay.
- DNS in a tightly controlled large enterprise internal network can be handled with relatively simple microservices. Your org will likely have something already though.
- Dev/Stage/Production: if you can spin up instances on demand this is trivial. Also financial services and other regulated biz have been doing this for eons before k8s.
- Load Balancers: lots of non-k8s options exist (software and hardware appliances).
- Prometheus / Grafana (and things like Netdata) work very well even without k8s.
Load balancing and ingress are definitely the most interesting pieces of the puzzle. Some choose nginx or Envoy, but there are also teams that use their own ingress solution (sometimes open-sourced!). But why would a team do this? Or more appropriately, why would their management spend on this? Answer: many don’t! But for those that do — the driver is usually cost*, availability and accountability, along with engineering capability as a secondary driver. (*Cost because it’s easy to set up a mixed-ability team with experienced, mid-career and new engineers for this. You don’t need a team full of kernel hackers.)
It costs less than you think, it creates real accountability throughout the stack, and most importantly you’ve now got a team of engineers who can rise to any reasonable challenge, and who can be cross-pollinated throughout the org. In brief, the goal is to have engineers, not “k8s implementers” or “OpenShift implementers” or “Cloud Foundry implementers”. |
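To ground the "Prometheus / Grafana work very well even without k8s" point: a minimal sketch of a prometheus.yml fragment scraping plain VMs and a load balancer, with no Kubernetes involved; hostnames and ports are placeholders:

```yaml
# prometheus.yml (fragment): static targets on ordinary hosts -- no cluster required.
# File- or Consul-based service discovery can replace the static lists as the fleet grows.
scrape_configs:
  - job_name: node          # node_exporter on each VM
    static_configs:
      - targets:
          - app-1.internal.example.com:9100
          - app-2.internal.example.com:9100
  - job_name: haproxy       # a software or hardware load balancer exposing metrics
    static_configs:
      - targets:
          - lb-1.internal.example.com:8404
```

Grafana then points at this Prometheus as a data source exactly as it would inside a cluster.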
| |
| ▲ | lmm 10 hours ago | parent [-] | | > DNS in a tightly controlled large enterprise internal network can be handled with relatively simple microservices. Your org will likely have something already though. And it will likely be buggy with all sorts of edge cases. > Dev/Stage/Production: if you can spin up instances on demand this is trivial. Also financial services and other regulated biz have been doing this for eons before k8s. In my experience financial services have been notably not doing it. > Load Balancers: lots of non-k8s options exist (software and hardware appliances). The problem isn't running a load balancer with a given configuration at a given point in time. It's how you manage the required changes to load balancers and configuration as time goes on. It's very common for that to be a pile of perl scripts that add up to an ad-hoc informally specified bug-ridden implementation of half of kubernetes. | | |
| ▲ | signal11 9 hours ago | parent [-] | | > And it will likely be buggy with all sorts of edge cases. I have seen this view in corporate IT teams who’re happy to be “implementers” rather than engineers. In real life, many orgs will in fact have third-party vendor products for internal DNS and cert authorities. Writing bridge APIs to these isn’t difficult and it keeps the IT guys happy. Relatively few orgs have written their own APIs, typically to manage a delegated zone. Again, you can say these must be buggy, but here’s the thing — everything’s buggy. Including k8s. As long as bugs are understood and fixed, no one cares. The proof of the pudding is how well it works. Internal DNS in particular is easy enough to control and test if you have engineers (vs implementers) in your team. > manage changes to load balancers … perl That’s a very black-and-white view, that teams are either on k8s (which to you is the bee’s knees) or a pile of Perl (presumably unmaintainable). Speaks to an interesting unconscious bias. Perhaps it comes from personal experience, in which case I’m sorry you had to be part of such a team. But it’s not particularly difficult to follow modern best practices and operate your own stack. But if your starter stance is that “k8s is the only way”, no one can talk you out of your own mental hard lines. | |
| ▲ | lmm 8 hours ago | parent [-] | | > Again, you can say these must be buggy, but here’s the thing — everything’s buggy. Including k8s. As long as bugs are understood and fixed, no one cares. Agreed, but internal products are generally buggier, because an internal product is in a kind of monopoly position. You generally want to be using a product that is subject to competition, that is a profit center rather than a cost center for the people who are making it. > Internal DNS in particular is easy enough to control and test if you have engineers (vs implementers) in your team. Your team probably aren't DNS experts, and why should they be? You're not a DNS company. If you could make a better DNS - or a better DNS-deployment integration - than the pros, you'd be selling it. (The exception is if you really are a DNS company, either because you actually do sell it, or because you have some deep integration with DNS that enables your competitive advantage) > Perhaps it comes from personal experience, in which case I’m sorry you had to be part of such a team. But it’s not particularly difficult to follow modern best practices and operate your own stack. I'd say that's a contradiction in terms, because modern best practice is to not run your own stack. I don't particularly like kubernetes qua kubernetes (indeed I'd generally pick nomad instead). But I absolutely do think you need a declarative, single-source-of-truth way of managing your full deployment, end-to-end. And if your deployment is made up of a standard load balancer (or an equivalent of one), a standard DNS, and prometheus or grafana, then you've either got one of these products or you've got an internal product that does the same thing, which is something I'm extremely skeptical of for the same reason as above - if your company was capable of creating a better solution to this standard problem, why wouldn't you be selling it? (And if an engineer was capable of creating a better solution to this standard problem, why would they work for you rather than one of the big cloud corps?) In the same way I'm very skeptical of any company with an "internal cloud" - in my experience such a thing is usually a significantly worse implementation of AWS, and, yes, is usually held together with some flaky Perl scripts. Or an internal load balancer. It's generally NIH, or at best a cost-cutting exercise which tends to show; a company might have an internal cloud that's cheaper than AWS (I've worked for one), but you'll notice the cheapness. Now again, if you really are gaining a competitive advantage from your things then it may make sense to not use a standard solution. But in that case you'll have something deeply integrated, i.e. monolithic, and that's precisely the case where you're not deploying separate standard DNS, separate standard load balancers, separate standard monitoring etc.. And in that case, as grandparent said, not using k8s makes total sense. But if you're just deploying a standard Rails (or what have you) app with a standard database, load balancer, DNS, monitoring setup? Then 95% of the time your company can't solve that problem better than the companies that are dedicated to solving that problem. Either you don't have a solution at all (beyond doing it manually), you use k8s or similar, or you NIH it. Writing custom code to solve custom problems can be smart, but writing custom code to solve standard problems usually isn't. | | |
| ▲ | fragmede 6 hours ago | parent [-] | | > if your company was capable of creating a better solution to this standard problem, why wouldn't you be selling it? Let's pretend I'm the greatest DevOps engineer ever, and I write a Kubernetes replacement that's 100x better. Since it's 100x better, I simply charge 100x as much as it costs per CPU/RAM for a Kubernetes license to 1,000 customers, take all of that money to the bank, and deposit my check for $0, because 100x a free license is still nothing. I don't disagree with the rest of the comment, but the market for the software to host a web app is a weird market. |
|
|
|
|
|
| ▲ | zug_zug 16 hours ago | parent | prev | next [-] |
| > If you answer yes to many of those questions, there's really no better alternative than k8s. Nah, most of that list is basically free for any company that uses an Amazon load balancer and an auto scaling group. In terms of likelihood of incidents, time, and cost, those will each be an order of magnitude higher with a team of Kubernetes engineers than with a less complex setup. |
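For readers unfamiliar with that setup, a hypothetical CloudFormation sketch of the "ALB plus auto scaling group" approach: load balancing, health checks, and horizontal scaling with no cluster. Networking and an application AMI are assumed to exist and are passed in as parameters; listeners for TLS and scaling policies are left out for brevity.

```yaml
AWSTemplateFormatVersion: "2010-09-09"
Parameters:
  VpcId:
    Type: AWS::EC2::VPC::Id
  SubnetIds:
    Type: List<AWS::EC2::Subnet::Id>
  ImageId:
    Type: AWS::EC2::Image::Id        # an AMI baked with the application
Resources:
  AppLaunchTemplate:
    Type: AWS::EC2::LaunchTemplate
    Properties:
      LaunchTemplateData:
        ImageId: !Ref ImageId
        InstanceType: t3.small
  AppTargetGroup:
    Type: AWS::ElasticLoadBalancingV2::TargetGroup
    Properties:
      VpcId: !Ref VpcId
      Port: 8080
      Protocol: HTTP
      HealthCheckPath: /healthz      # assumed health endpoint
  AppLoadBalancer:
    Type: AWS::ElasticLoadBalancingV2::LoadBalancer
    Properties:
      Type: application
      Subnets: !Ref SubnetIds
  AppListener:
    Type: AWS::ElasticLoadBalancingV2::Listener
    Properties:
      LoadBalancerArn: !Ref AppLoadBalancer
      Port: 80
      Protocol: HTTP
      DefaultActions:
        - Type: forward
          TargetGroupArn: !Ref AppTargetGroup
  AppAutoScalingGroup:
    Type: AWS::AutoScaling::AutoScalingGroup
    Properties:
      MinSize: "2"
      MaxSize: "10"
      VPCZoneIdentifier: !Ref SubnetIds
      TargetGroupARNs:
        - !Ref AppTargetGroup
      HealthCheckType: ELB
      LaunchTemplate:
        LaunchTemplateId: !Ref AppLaunchTemplate
        Version: !GetAtt AppLaunchTemplate.LatestVersionNumber
```

The ELB health check type gives the "replace failed instances" behaviour, and a scaling policy (not shown) handles horizontal scaling.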
|
| ▲ | psychoslave 7 hours ago | parent | prev [-] |
| Oz Nova nailed it nicely in "You Are Not Google" https://blog.bradfieldcs.com/you-are-not-google-84912cf44afb |