| ▲ | paxys 15 hours ago |
| > Kubernetes comes with substantial infrastructure costs that go beyond DevOps and management time. The high cost arises from needing to provision a bare-bones cluster with redundant management nodes.

That's your problem right there. You really don't want to be setting up and managing a cluster from scratch for anything less than a datacenter-scale operation. If you are already on a cloud provider, just use their managed Kubernetes offering instead. It will come with a free control plane and abstract away most of the painful parts for you (like etcd, networking, load balancing, ACLs, node provisioning, kubelets, proxies). That way you just bring your own nodes/VMs and can still enjoy the deployment standardization and other powerful features without the operational burden. |
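As a rough illustration of the "bring your own nodes" model, a minimal eksctl cluster definition might look something like this sketch (cluster name, region, and instance type are placeholders, not taken from the thread):

```yaml
# cluster.yaml -- minimal sketch of a managed EKS cluster via eksctl.
# AWS runs the control plane; you only declare the worker nodes you bring.
apiVersion: eksctl.io/v1alpha5
kind: ClusterConfig

metadata:
  name: demo-cluster        # placeholder name
  region: us-east-1         # placeholder region

managedNodeGroups:
  - name: general-workers
    instanceType: m5.large  # placeholder instance type
    desiredCapacity: 3
    minSize: 2
    maxSize: 5
```

Spun up with `eksctl create cluster -f cluster.yaml`; GKE and AKS have equivalent one-command managed setups.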
|
| ▲ | dikei 14 hours ago | parent | next [-] |
| Even for an on-prem scenario, I'd rather maintain a K8S control plane and let developer teams manage their own app deployments in their own little namespaces, than provision a bunch of new VMs every time a team needs some services deployed. |
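For illustration, the namespace-per-team setup described here usually boils down to a few standard objects; a minimal sketch (team name, quota numbers, and group name are invented for the example):

```yaml
# One namespace per team, with a resource budget and self-service RBAC.
apiVersion: v1
kind: Namespace
metadata:
  name: team-payments            # hypothetical team
---
apiVersion: v1
kind: ResourceQuota
metadata:
  name: team-payments-quota
  namespace: team-payments
spec:
  hard:
    requests.cpu: "8"            # example budget, tune per team
    requests.memory: 16Gi
    limits.cpu: "16"
    limits.memory: 32Gi
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: team-payments-edit
  namespace: team-payments
subjects:
  - kind: Group
    name: team-payments          # hypothetical IdP group
    apiGroup: rbac.authorization.k8s.io
roleRef:
  kind: ClusterRole
  name: edit                     # built-in role: manage workloads, not cluster config
  apiGroup: rbac.authorization.k8s.io
```

Binding the team's group to the built-in `edit` ClusterRole lets them deploy and manage their own workloads inside the namespace without touching anything cluster-wide.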
| |
| ▲ | mzhaase 8 hours ago | parent | next [-] |

This for me is THE reason for using container management. Without containers, you end up with hundreds of VMs. Then, when the time comes that you have to upgrade to a new OS, you have to go through the dance, for every service:

- set up new VMs
- deploy software on new VMs
- have the team responsible give their ok

It takes forever, and in my experience, often never completes because some snowflake exists somewhere, or something needs a lib that doesn't exist on the new OS. VMs decouple the OS from the hardware, but you should still decouple the service from the OS. So that means containers. But then managing hundreds of containers still sucks. With container management, I just:

- add x new nodes to the cluster
- drain x old nodes and delete them |
| ▲ | rtpg 13 hours ago | parent | prev | next [-] | | Even as a K8s hater, this is a pretty salient point. If you are serious about minimizing ops work, you can make sure people are deploying things in very simple ways, and in that world you are looking at _very easy_ deployment strategies relative to having to wire up VMs over and over again. Just feels like lots of devs will take whatever random configs they find online and throw them over the fence, so now you just have a big tangled mess for your CRUD app. | | |
| ▲ | guitarbill 13 hours ago | parent | next [-] |

> Just feels like lots of devs will take whatever random configs they find online

Well, it usually isn't a mystery. Requiring a developer team to learn k8s, likely with no resources, time, or help, is not a recipe for success. You might have minimised someone else's ops work, but at what cost? |
| ▲ | rtpg 13 hours ago | parent [-] |

I am partly sympathetic to that (and am a person who does this), but I think too many devs are very nihilistic and use this as an excuse to stop thinking. Everyone in a company is busy doing stuff! There's a lot of nuance here: I think ops teams are comfortable with what I consider "config spaghetti", some companies are incentivised to ship stuff that's hard to configure manually, and a lot of other dynamics are involved.

But at the end of the day, if a dev copy-pastes some config into a file, taking a quick look over it and asking yourself "how much of this can I actually remove?" is a valuable skill. Really you want the ops team to be absorbing this as well, but this is where constant atomization of teams makes things worse! Extra coordination costs plus a loss of a holistic view of the system mean the iteration cycles get too slow.

Still, there are plenty of things where (especially if you are the one integrating something!) you should be able to look over a thing, see, say, an if statement that will always be false for your case, and just remove it. So many modern ops tools are garbage and don't accept the idea of running something on your machine, but an if statement is an if statement is an if statement. |
| |
| ▲ | dikei 13 hours ago | parent | prev [-] |

> Just feels like lots of devs will take whatever random configs they find online and throw them over the fence, so now you just have a big tangled mess for your CRUD app.

Agree. To reduce the chance of a dev pulling some random configs out of nowhere, we maintain a Helm template that can be used to deploy almost all of our services in a sane way: just replace the container image and ports. The deployment is probably not optimal, but further tuning can be done after the service is up and we have gathered enough metrics.

We've also put all our configs in one place, since we found that devs tend to copy from existing configs in the repo before searching the internet. |
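A shared chart like the one described typically leaves only a handful of knobs per service; a hypothetical values.yaml for it might be as small as this (the chart structure and field names are assumptions, not the poster's actual template):

```yaml
# values.yaml for a hypothetical shared service chart:
# each team overrides only the image and the ports it listens on.
image:
  repository: registry.example.com/payments/api   # placeholder image
  tag: "1.4.2"

service:
  port: 8080            # port exposed by the Service
  targetPort: 8080      # port the container listens on

replicaCount: 2

# conservative defaults; tuned later once real metrics exist
resources:
  requests:
    cpu: 250m
    memory: 256Mi
  limits:
    cpu: "1"
    memory: 512Mi
```

Deploying is then roughly `helm install payments-api ./shared-chart -f values.yaml` (hypothetical names), with the Deployment, Service, probes, and labels templated once in the chart.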
| |
| ▲ | spockz 14 hours ago | parent | prev | next [-] | | I can imagine. Do you have complete automation set up around maintaining the cluster? We are now on-prem, using "pet" clusters with namespace-as-a-service automated on top. This causes all kinds of issues, because different workloads have different performance characteristics and requirements. They also share ingress and egress nodes, so any impact on those has a large blast radius. This leads to more rules and requirements. Having dedicated, managed clusters where everyone can decide their own sizing and which workloads to deploy to which cluster would be paradise compared to that. | |
| ▲ | solatic 13 hours ago | parent [-] |

> This causes all kinds of issues, because different workloads have different performance characteristics and requirements.

Most of these issues can be fixed by setting resource requests equal to limits and using integer CPU values, which gives the pods the Guaranteed QoS class. You should also give developers an interface that explains which nodes in your datacenter have which characteristics, using node labels and taints, and force developers to pick specific node groups by specifying node affinity and tolerations; don't bring nodes online without taints.

> They also share ingress and egress nodes, so any impact on those has a large blast radius.

This is true regardless of whether or not you use Kubernetes. |
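As a concrete sketch of the pattern described above (image, label key, and taint value are invented for the example): requests equal to limits with whole CPUs put the pod in the Guaranteed QoS class, while the node selector and toleration pin it to the node group the platform team has labeled and tainted for it.

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: batch-worker                              # hypothetical workload
spec:
  containers:
    - name: worker
      image: registry.example.com/batch-worker:1.0   # placeholder image
      resources:
        # requests == limits, whole CPUs -> Guaranteed QoS class
        requests:
          cpu: "2"
          memory: 4Gi
        limits:
          cpu: "2"
          memory: 4Gi
  # only schedule onto nodes the platform team labeled for this workload class
  nodeSelector:
    node-class: high-memory
  # ...and tolerate the matching taint, which keeps other workloads off those nodes
  tolerations:
    - key: node-class
      operator: Equal
      value: high-memory
      effect: NoSchedule
```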
| |
| ▲ | DanielHB 10 hours ago | parent | prev [-] |

> than provision a bunch of new VMs every time a team needs some services deployed.

Back in the old days before cloud providers, this was the only option. I started my career in the early 2010s and caught the tail end of this; it was not fun. I remember my IT department refusing to set up git for us (we were using SVN before), so we just asked for a VM and set up a git repo on it ourselves to host our code. |
|
|
| ▲ | jillesvangurp 9 hours ago | parent | prev | next [-] |
| For most small setups, the cost of running an empty Kubernetes cluster (managed) is typically higher than setting up a DB, a couple of VMs and a load balancer, which goes a long way for running a simple service. Add some buckets and a CDN and you are pretty much good to go. If you need dedicated people just to stay on top of running your services, you have a problem that's costing you hundreds of thousands per year. There's a lot of fun and easy stuff you can do with that kind of money.

This is a pattern I see with a lot of teams that get sucked into using Kubernetes, microservices, terraform, etc. Once you need a few people just to stay on top of the complexity that comes from that, you are already spending a lot. I tend to keep things simple on my own projects, because any amount of time I spend on that, I'm not spending on more valuable work like adding features and fixing bugs.

Of course it's not black and white, and there's always a trade-off between over- and under-engineering. But a lot of teams default to over-engineering simply by using Kubernetes from day one. You don't actually need to. There's nothing wrong with a monolith running on two simple VMs with a load balancer in front of it. It worked fine twenty years ago and it is still perfectly valid. And it's dead easy to set up and manage in most popular cloud environments. If you use some kind of scaling group, it will scale just fine. |
| |
| ▲ | dikei 8 hours ago | parent [-] |

> For most small setups, the cost of running an empty Kubernetes cluster (managed) is typically higher than setting up a DB, a couple of VMs and a load balancer, which goes a long way for running a simple service.

Not really: the cost of an empty EKS cluster is the management fee of $0.10/hour, or roughly the price of a small EC2 instance. | |
| ▲ | jillesvangurp 7 hours ago | parent [-] | | 0.1 * 24 * 30 = 720$/month. That's about 2x our monthly cloud expenses. That's not a small VM. You can buy a Mac mini for that. | |
| ▲ | dikei 7 hours ago | parent [-] | | $72. Though if you are only spending $350 monthly on VMs, databases and load balancers, you can probably count your resource instances by hand, and don't need a K8S cluster yet. |
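For reference, the arithmetic behind the corrected figure, using the $0.10 per cluster-hour EKS management fee cited above:

$0.10/hour × 24 hours/day × 30 days = $72/month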
|
|
|
|
| ▲ | sbstp 12 hours ago | parent | prev | next [-] |
| Most managed control planes are not free anymore; they cost about $70/mo on AWS and GCP. They used to be free a while back. |
| |
| ▲ | dikei 8 hours ago | parent | next [-] | | GCP has a $74 free credit for a zonal cluster, so you effectively get the first cluster for free. And even $70 is cheap, considering that a cluster should be shared by all the services from all the teams in the same environment, bar very few exceptions. | |
| ▲ | szszrk 10 hours ago | parent | prev [-] | | That's around the cost of a single VM (the cheapest 8 GB RAM instance I found quickly). Azure has a free tier where the control plane is completely free (but with no SLA) - a great deal for test clusters and testing infra. If you are that worried about costs, then public cloud may not be for you at all, or you should look at ECS/App containers or serverless. |
|
|
| ▲ | oofbey 12 hours ago | parent | prev [-] |
| If you do find yourself wanting to create a cluster by hand, it's probably because you don't actually need lots of machines in the "cluster". In my experience it's super handy to run tests on a single-node "cluster", and for that k3s is super simple. It takes something like 8 seconds to install k3s on a bare CI/CD instance, and then you can apply your YAML and see that it works. Once you're used to it, the high-level abstractions of k8s are wonderful. I run k3s on Raspberry Pis because it takes care of all sorts of stuff for you, and it's easy to port code and design patterns from the big backend service to a little home project. |
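For the single-node CI case described here, the YAML you apply can stay tiny; a minimal sketch (names and image are placeholders, not from the comment):

```yaml
# smoke-test.yaml -- minimal Deployment + Service to verify that workloads
# schedule and serve on a throwaway single-node k3s "cluster" in CI.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: hello
spec:
  replicas: 1
  selector:
    matchLabels:
      app: hello
  template:
    metadata:
      labels:
        app: hello
    spec:
      containers:
        - name: hello
          image: nginx:1.27        # placeholder image for the smoke test
          ports:
            - containerPort: 80
---
apiVersion: v1
kind: Service
metadata:
  name: hello
spec:
  selector:
    app: hello
  ports:
    - port: 80
      targetPort: 80
```

Applied with `k3s kubectl apply -f smoke-test.yaml` (or plain `kubectl` against the kubeconfig k3s writes), then `kubectl rollout status deployment/hello` is enough to tell CI whether the manifests actually work.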