| ▲ | cyberpunk 6 days ago |
| i really don’t know where this complexity thing comes from anymore. maybe back in the day when a k8s cluster was a 2-hour kubespray run or something, but it’s now a single yaml file and an ssh key if you use something like rke. |
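For context, a minimal RKE cluster.yml really is about that small. A sketch, not a verbatim config; the address, user, and key path are placeholders:

    # cluster.yml: one node taking all three roles; rke reaches it over ssh
    nodes:
      - address: 203.0.113.10             # placeholder IP
        user: ubuntu                      # ssh user with access to docker
        role: [controlplane, etcd, worker]
        ssh_key_path: ~/.ssh/id_rsa       # the single ssh key mentioned above

Running rke up next to that file brings the cluster up and writes out a kubeconfig.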
|
| ▲ | hombre_fatal 6 days ago | parent | next [-] |
| You are so used to the idiosyncrasies of k8s that you are probably blind to them. And you are probably so experienced with the k8s stack that you can easily debug issues, so you discount them. Not long ago, I was using Google Kubernetes Engine when DNS started failing inside the k8s cluster on a routine deploy that didn't touch the k8s config. I hacked on it for quite some time before I gave up and decided to start a whole new cluster. At which point I decided that if I was going to go through the trouble, I might as well migrate to Linode. It was pretty sobering. Kubernetes has many moving parts, and they all live inside your part of the stack. That's one of the things that makes it complex compared to things like Heroku or Google Cloud Run, where the moving parts run on the provider's side of the stack. It's also complex because it does a lot compared to just pushing a container somewhere. You might be used to it, but that doesn't mean it's not complex. |
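(An aside for anyone hitting the same class of failure: the usual first step for in-cluster DNS trouble is a throwaway pod you can exec into and run lookups from. A minimal sketch along the lines of the upstream docs; the pod name is arbitrary and the image is just any container that ships nslookup/dig:

    apiVersion: v1
    kind: Pod
    metadata:
      name: dnsutils                # arbitrary name
    spec:
      containers:
        - name: dnsutils
          image: registry.k8s.io/e2e-test-images/jessie-dnsutils:1.3
          command: ["sleep", "infinity"]   # keep the pod alive for exec

Then kubectl exec -it dnsutils -- nslookup kubernetes.default shows whether the cluster DNS service is answering at all.)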
|
| ▲ | esseph 6 days ago | parent | prev | next [-] |
| Running large deployments on bare metal and managing the software and firmware lifecycle still involves significant complexity. Modern tooling makes things much better, but it's not "easy". The "Kubernetes iceberg" meme is 3+ years old but still fairly accurate: https://www.reddit.com/r/kubernetes/comments/u9b95u/kubernet... |
|
| ▲ | vanillax 6 days ago | parent | prev | next [-] |
| I was gonna echo this. K8s is rather easy to set up. Certificates, domains, and CI/CD (Flux/Argo) are where some complexity comes in. If anyone wants to learn more, I have a video that I think shows the most straightforward yet production-capable setup for hosting at home. |
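For readers who haven't seen the GitOps piece: once Argo CD (or Flux) is installed, per-app setup is typically one custom resource. A hedged sketch of an Argo CD Application; the repo URL, paths, and names are placeholders:

    apiVersion: argoproj.io/v1alpha1
    kind: Application
    metadata:
      name: my-app
      namespace: argocd
    spec:
      project: default
      source:
        repoURL: https://github.com/example/deploys.git   # placeholder repo
        targetRevision: main
        path: apps/my-app                                 # folder of manifests to apply
      destination:
        server: https://kubernetes.default.svc            # the cluster Argo CD runs in
        namespace: my-app
      syncPolicy:
        automated:            # keep the cluster converged on whatever is in git
          prune: true
          selfHeal: true

From there Argo CD watches the repo and applies whatever lands in that path.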
|
| ▲ | xp84 6 days ago | parent | prev | next [-] |
| A few years ago, I set up a $40 k8s "cluster" consisting of a couple of nodes at DigitalOcean, following this tutorial: https://www.digitalocean.com/community/tutorials/how-to-auto... I was able to create a new service and deploy it with a couple of simple, ~8-line YAML files, and the cluster took care of setting up DNS on a subdomain of my main domain, wiring up Let's Encrypt, and deploying the container. Deploying the latest version of my built container image was one kubectl command. I loved it. |
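The pattern behind setups like this (a hedged sketch with hypothetical names and hosts, assuming an nginx ingress controller and a cert-manager ClusterIssuer named letsencrypt-prod; the DNS automation is typically ExternalDNS watching the same resources) is that each new service only needs a small Ingress:

    apiVersion: networking.k8s.io/v1
    kind: Ingress
    metadata:
      name: myapp
      annotations:
        cert-manager.io/cluster-issuer: letsencrypt-prod   # cert-manager issues the TLS cert
    spec:
      ingressClassName: nginx
      rules:
        - host: myapp.example.com        # placeholder subdomain
          http:
            paths:
              - path: /
                pathType: Prefix
                backend:
                  service:
                    name: myapp          # the small Service/Deployment behind it
                    port:
                      number: 80
      tls:
        - hosts: [myapp.example.com]
          secretName: myapp-tls          # cert-manager populates this Secret

And the one-command deploy is along the lines of kubectl set image deployment/myapp myapp=registry.example.com/myapp:v2.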
|
| ▲ | notnmeyer 6 days ago | parent | prev [-] |
| i assume when people are talking about k8s complexity, it’s either more complicated scenarios, or they’re not talking about managed k8s. even then though, it’s more that complex needs are complex, not that k8s is the thing driving the complexity. if your primary source of complexity is k8s, you’re either doing it wrong or chose the wrong tool. |
| ▲ | stego-tech 6 days ago | parent [-] |
| > or they’re not talking about managed k8s |
| Bingo! Managed K8s on a hyperscaler is easy mode, and a godsend. I’m speaking from the cluster-admin and bare-metal perspectives, where it’s a frustrating exercise in micromanaging all these additional abstraction layers just to get the basic “managed” K8s functions into a reliable state. If you’re using managed K8s, then don’t @ me about “It’S nOt CoMpLeX” because we’re not even in the same book, let alone the same chapter. Hypervisors can deploy to bare metal and shared storage without much additional configuration, but K8s requires defining PVs, storage classes, network layers, local DNS, local firewalls and routers, etc., most of which pre-1.20 K8s did not play nicely with out of the box. It’s gotten better these past two years for sure, but it’s still not as plug-and-play as something like ESXi+vSphere, RHEL+Cockpit, or PVE, and that’s a damn shame. Hence why I’m always eager to drive something like Canine! (EDIT: unless you have a real reason to do bare-metal self-hosted K8s from binaries, you should absolutely be on a managed K8s provider of some sort. Seriously, the headaches aren’t worth the cost savings for any org of size.) |
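To make the bare-metal storage point concrete: managed K8s ships a default StorageClass that provisions volumes on demand, while on plain bare metal you hand-write something like the following for every disk. A hedged sketch; names, sizes, and paths are placeholders:

    # no dynamic provisioner on plain bare metal: each PV is pre-created by hand
    apiVersion: storage.k8s.io/v1
    kind: StorageClass
    metadata:
      name: local-ssd
    provisioner: kubernetes.io/no-provisioner
    volumeBindingMode: WaitForFirstConsumer
    ---
    apiVersion: v1
    kind: PersistentVolume
    metadata:
      name: node1-ssd0                 # one of these per node/disk
    spec:
      capacity:
        storage: 100Gi
      accessModes: [ReadWriteOnce]
      storageClassName: local-ssd
      local:
        path: /mnt/disks/ssd0          # placeholder mount point
      nodeAffinity:                    # local volumes must be pinned to their node
        required:
          nodeSelectorTerms:
            - matchExpressions:
                - key: kubernetes.io/hostname
                  operator: In
                  values: [node1]

Multiply that by the CNI choice, DNS, and load balancing (e.g. MetalLB) and the "micromanaging abstraction layers" complaint above is easy to picture.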
| ▲ | esseph 6 days ago | parent [-] |
| I agree with all of this except for your bottom edit. Nutanix and others are helping a lot in this area. Also really like Talos and hope they keep growing. |
| ▲ | stego-tech 6 days ago | parent | next [-] |
| That’s fair! Nutanix impressed me as well when I was doing a hypervisor deep dive in 2022/2023, but I had concerns about their (lack of) profitability in the long run. VMware Tanzu wasn’t bad either, but was more of an arm-pull than AWS was for K8s. Talos is on my “to review” list, especially with their community license that lets you manage small deployments like a proper enterprise might (great evangelism idea, there), but moving everything to KubeVirt was a nonstarter in the org at the time. K8s’ ecosystem is improving by the day, but I’m still leaning towards a managed K8s cluster from a cloud provider for most production workloads, as it really is just a few lines of YAML to bootstrap new clusters with automated backups and secrets management nowadays - if you don’t mind the eye-watering bill that comes every month for said convenience. |
| ▲ | esseph 6 days ago | parent [-] |
| If you work in any of the 16 CISA-identified critical infrastructure sectors, one of their recommendations is that organizations be prepared to operate for more than 24h without an Internet connection. Kinda hard to control real-world things during an outage when the stack relies on an internet connection. Note: Nutanix made some interesting k8s-related acquisitions in the last few years. If interested, you should take a look at some of the things they are working on. |
| ▲ | stego-tech 6 days ago | parent [-] |
| If I were still in that role, I’d absolutely be keeping my Nutanix rep warm for a possible migration. Alas, I’m in another org building them a Win11 imaging pipeline for the time being, and Nutanix doesn’t want to play nice with my personal N100 NUCs for me to try their Community Edition. |
| ▲ | nabeards 6 days ago | parent | prev [-] |
| Exactly the same as you said. Nobody rents GPUs as cheaply as I can get them for LLM work in-cluster. |