JohnMakin 12 hours ago
Having spent most of my career in Kubernetes (usually managed by a cloud provider), I always wonder when I see things like this: what is the use case or benefit of not having a control plane? To me, the control plane is the primary feature of Kubernetes and one I would not want to go without. I know this describes operational overhead as a reason, but how that relates to the control plane is not clear to me. Even managing a few hundred nodes and maybe 10,000 containers, relatively small, I update once a year and the managed cluster updates machine images and versions automatically. Are people trying to self-host Kubernetes for production cases, and that's where this pain comes from? Sorry if it is a rude question.
psviderski 12 hours ago
Not rude at all. The benefit is a much simpler model: you connect machines into a network where every machine is equal. You can add more, remove some. There is no HA 3-node centralised “cluster brain” to worry about, because there isn't one.

It's a similar experience to having a cloud provider manage the control plane for you. But when you host everything yourself, you do have to worry about its availability: losing etcd quorum leaves the cluster unusable. Many people want to avoid this, especially when running at a smaller scale, like a handful of machines. The cluster network can even partition, and each partition keeps operating, so you can still deploy and update apps in each one individually.

That's essentially what we all did in the pre-k8s era with Chef and Ansible, but without the boilerplate and the wheel-reinventing, and using the learnings from k8s and friends.
| ||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||
kelnos 12 hours ago
> a few hundred nodes and maybe 10,000 containers, relatively small

That doesn't feel small to me. For something I'm working on I'll probably have two nodes and around 10 containers. If it works out and I get some growth, maybe that will go up to, say, 5-7 nodes and 30 or so containers? I dunno. I'd like some orchestration there, but k8s feels way too heavy even for my "grown" case. I feel like there are potentially a lot of small businesses at this sort of scale?
baq 12 hours ago
> Are people trying to self-host Kubernetes

Of course they are…? That's half the point of k8s: if you want to self-host, you can. But it's just like backups: if you never try it, you should assume you can't do it when you need to.
davedx 3 hours ago
> a few hundred nodes and maybe 10,000 containers, relatively small

And that's just your CI jobs, right? ;)
motoboi 10 hours ago
Kubernetes is not only an orchestrator but a scheduler: it's a way to run arbitrary processes on a bunch of servers. But what if your processes are known beforehand? Then you don't need a scheduler, nor an orchestrator. What if it's just your web app with two containers and nothing more?
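For example, when the whole workload is known up front, it can simply be written down as a plain Compose file and started with `docker compose up -d` on a single host. A minimal sketch of that idea; the image names, port, and credential below are placeholders, not anything from the thread:

    # docker-compose.yml - the entire "schedule" is fixed: one web container, one database.
    services:
      web:
        image: example/webapp:latest   # hypothetical application image
        ports:
          - "8080:8080"
        depends_on:
          - db
        restart: unless-stopped        # the local engine restarts it; no cluster scheduler involved
      db:
        image: postgres:16
        environment:
          POSTGRES_PASSWORD: example   # placeholder credential
        volumes:
          - db-data:/var/lib/postgresql/data
        restart: unless-stopped

    volumes:
      db-data:

There is no control plane or scheduler here; the set of processes is declared ahead of time, which is exactly the case where a full orchestrator buys you little.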
esseph 11 hours ago
Try it on bare metal where you're managing the distributed storage and the hardware and the network and the upgrades too :)
| ||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||
weitendorf 4 hours ago
I'm working on a similar project (here's the v0 of its state management and the libraries its "local control plane" will use to implement a mesh: https://github.com/accretional/collector) and worked on the data plane for Google Cloud Run/Functions. IMO Kubernetes is great if your job is to fiddle with Kubernetes. But damn, the overhead is insane.

There is this broad swathe of mid-sized tech companies and non-tech Internet application providers (e.g. ecommerce, governments, logistics, etc.) that spend a lot of their employees' time operating Kubernetes clusters, and a lot of money on the compute for those clusters, which they probably overprovision and also overpay for through some kind of managed Kubernetes/hyperscaler platform, plus a bunch of SaaS for things like metrics and logging, container security products, and alerting. A lot of these guys are spending 10-40% of their budget on compute, payroll, and SaaS to host CRUD applications that could probably run on a small number of servers without a "platform" team behind them, just a couple of developers who know what they're doing.

Unless they're paying $$$, each of these deployments is running its own control plane and dealing with all the operational and cognitive overhead that entails. Most of them are running in a small number of datacenters alongside a bunch of other people running/managing/operating Kubernetes clusters of their own. It's insanely wasteful, because if there were a proper multitenant service mesh implementation (what I'm working on) that was easy to use, everybody could share the same control plane, roughly one per datacenter, and literally just consume the Kubernetes APIs they actually need, the ones that let them run and orchestrate/provision their application, and forget about all the fucking configuration of their cluster. BTW, that is how Borg works, which Kubernetes was hastily cobbled together to mimic in order to capitalize on Containers Being So Hot Right Now.

The vast majority of these Kubernetes users just want to run their applications, their customers don't know or care that Kubernetes is in the picture at all, and the people writing the checks would LOVE to not be spending so much time and money on the same platform engineering problems as every other midsize company on the Internet.

> what is the use case or benefit of not having a control plane?

All that is to say: it's not having to pay for a bunch of control plane nodes and SaaS and a Kubernetes guy/platform team. At small and medium scales, it's running a bunch of container instances as long as possible without embarking on a 6-24mo, $100k-$10m+ expedition to Do Kubernetes. It's not having to secure some fricking VPC with a million internal components and plugins/SaaS, it's not letting some cloud provider own your soul, and it's not locking you into something so expensive you have to hire an entire internal team of Kubernetes guys to set it up.

All the value in the software industry comes from the actual applications people are paying for. So the better you can let people do that without infrastructure getting in the way, the better. Making developers deal with this bullshit (or deciding to have 10-30% of your developers deal with it full-time) is what gets in the way: https://kubernetes.io/docs/concepts/overview/components/
| ||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||