kortilla 3 months ago

All of these anecdotes seem to come from people who don’t bother to try to learn Kubernetes.

> YAML files, and then spend a day fixing them by copy-pasting increasingly-convoluted things on stackexchange.

This is terrible behavior. It's no different from yanking out PAM modules because you’re getting SSH auth failures caused by bad permissions on an SSH key.

> If I get to tens of millions of users, maybe I’ll worry about it then.

K8s isn’t there for 10s of millions of users. It’s there so you’re not dependent on some bespoke VM state. It also allows you to do code review on infra changes like port numbers being exposed, etc.

Separately, your VM likely isn’t coming from any standard build pipeline so now a vulnerability patch is a login to the machine and an update, which hopefully leaves it in the same state as VMs created new…

Oh, and assuming you don’t want to take downtime on every update, you’ll want a few replicas and load balancing across them (or active/passive HA at a minimum). Good luck representing that as reviewable code as well if you are running VMs.
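
To make "reviewable code" concrete, here's a rough sketch (the names and image are made up, not from any real setup) of a Deployment plus Service where the replica count and the exposed port are just lines in a diff:

    apiVersion: apps/v1
    kind: Deployment
    metadata:
      name: example-api            # hypothetical app name
    spec:
      replicas: 3                  # scale changes show up in code review
      selector:
        matchLabels:
          app: example-api
      template:
        metadata:
          labels:
            app: example-api
        spec:
          containers:
            - name: api
              image: registry.example.com/example-api:1.2.3   # hypothetical image
              ports:
                - containerPort: 8080
    ---
    apiVersion: v1
    kind: Service
    metadata:
      name: example-api
    spec:
      selector:
        app: example-api
      ports:
        - port: 80                 # the exposed port number is right here in the diff
          targetPort: 8080

Changing the port or bumping replicas is a one-line change someone else has to approve, and the rolling update across those replicas is what avoids the maintenance-window downtime.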

The people that don’t understand the value prop of infra as code orchestration systems like k8s tend to work in environments where “maintenance downtime” is acceptable and there are only one or two people that actually adjust the configurations.

secondcoming 3 months ago | parent | next [-]

Just because you're using VMs doesn't mean you're now dealing with state.

It's 100% possible to have stateless VMs running in an auto-scaling instance group (in GCP speak; I forget what AWS calls them).
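
Roughly, in gcloud terms (hypothetical names and sizes, just a sketch of the idea):

    # An instance template defines the stateless VM image and shape
    gcloud compute instance-templates create web-template \
        --machine-type=e2-medium \
        --image-family=debian-12 --image-project=debian-cloud

    # A managed instance group keeps N identical, replaceable VMs running
    gcloud compute instance-groups managed create web-mig \
        --zone=us-central1-a --template=web-template --size=3

    # Optional autoscaling on top of the group
    gcloud compute instance-groups managed set-autoscaling web-mig \
        --zone=us-central1-a --min-num-replicas=3 --max-num-replicas=10 \
        --target-cpu-utilization=0.6

Kill any one VM and the group just recreates it from the template, so there's no bespoke per-VM state to preserve.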

kortilla 3 months ago | parent | next [-]

Once you have the tools to manage all of that, you effectively have Kubernetes. Container vs VM is largely irrelevant to what the OP is complaining about when it comes to k8s.

People who don’t like k8s tend to be fine with Docker. It’s usually that they don’t like declarative state, or thinking in selectors and other abstractions.

pdimitar 3 months ago | parent [-]

Quite the contrary, I support declarative configuration and code-reviewable infrastructure changes but k8s is just too much for me.

I paired with one of our platform engineers several months ago. A simple app that listens on Kafka, stores stuff in PostgreSQL, and has only one exposed port... and that needed at least 8 YAML files: Ingress, service ports, and whatever other things k8s feels should be described. I forgot almost all of them the same day.

I don't doubt that doing it every day would get me used to it, and I'd even find it intuitive eventually, I suppose. But it's absolutely not coming naturally to me.

I'd vastly prefer just a single config block with a declarative DSL in it, a la nginx or Caddy, and describe all these service artifacts in one place. (Or similar to a systemd service file.)

Too many files. Fold things into far fewer of them and I'll probably become an enthusiastic k8s supporter.
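
For a sense of what I mean by "one place", something in the spirit of a Compose file (hypothetical names, and obviously not a drop-in replacement for everything k8s does):

    # compose.yaml: the whole app described in a single file
    services:
      api:
        image: registry.example.com/example-api:1.2.3   # hypothetical image
        ports:
          - "8080:8080"            # the one exposed port, in one place
        environment:
          KAFKA_BROKERS: kafka:9092
          DATABASE_URL: postgres://app@postgres:5432/app
        depends_on:
          - kafka
          - postgres
      kafka:
        image: apache/kafka:3.7.0
      postgres:
        image: postgres:16
        environment:
          POSTGRES_PASSWORD: example
        volumes:
          - pgdata:/var/lib/postgresql/data
    volumes:
      pgdata:

One file, one diff, and I can hold the whole thing in my head.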

everfrustrated 3 months ago | parent | prev [-]

In the beginning AWS didn't even support state on their VMs! All VMs were ephemeral with no state persistence when terminated. They later introduced EBS to allow for the more classic enterprise IT use cases.

tombert 3 months ago | parent | prev | next [-]

Sure, because Kubernetes is convoluted and not fun and is stupidly bureaucratic. I might learn to enjoy being kicked in the balls if I practiced enough but after the first time I don't think I'd like to continue.

> This is terrible behavior. It's no different from yanking out PAM modules because you’re getting SSH auth failures caused by bad permissions on an SSH key.

Sure, I agree; maybe they should make the entire process less awful and easier to understand, then. If they're providing a framework to do distributed systems "correctly", then don't make it easy for someone whose heart really isn't in it to screw it up.

> K8s isn’t there for 10s of millions of users. It’s there so you’re not dependent on some bespoke VM state. It also allows you to do code review on infra changes like port numbers being exposed, etc.

That's true of basically any container stuff or orchestration stuff, but sure.

Kubernetes just screams "tool to make it look like I'm doing a lot of work" to me. I have similar complaints about pretty much all Java before Java ~17 or so.

I'm not convinced that something like k8s has to be as complicated as it is.

kortilla 3 months ago | parent [-]

> Sure, because Kubernetes is convoluted and not fun and is stupidly bureaucratic.

Describe what you think bureaucratic means in a tool.

> I might learn to enjoy being kicked in the balls if I practiced enough

This is the same thing people say when they don’t want to learn command-line tools “because they aren’t intuitive enough”. It’s a low-brow dismissal holding you back.

tombert 3 months ago | parent [-]

When I say “bureaucratic”, I mean having to edit multiple files for something that doesn’t seem like it should be very complicated.

xorcist 3 months ago | parent | prev [-]

> It’s there so you’re not dependent on some bespoke VM state. It also allows you to do code review on infra changes like port numbers being exposed

That's simply not true.

Every Kubernetes cluster I have seen and used gives a lot more leeway for the runtime state to change than a basic Ansible/Salt/Puppet configuration, just due to the sheer number of components involved. Everything from Terraform to Istio to ArgoCD gets changed in its own little unique way, each with its own possibilities for state changes.

Following GitOps in the Kubernetes ecosystem is something that requires discipline.
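
To illustrate what that discipline looks like in practice: even with a tool like Argo CD, keeping the cluster pinned to git is something you have to explicitly opt into. A rough sketch of an Application manifest (hypothetical repo and names):

    apiVersion: argoproj.io/v1alpha1
    kind: Application
    metadata:
      name: example-app                  # hypothetical
      namespace: argocd
    spec:
      project: default
      source:
        repoURL: https://git.example.com/infra.git   # hypothetical repo
        targetRevision: main
        path: apps/example-app
      destination:
        server: https://kubernetes.default.svc
        namespace: example
      syncPolicy:
        automated:
          prune: true        # remove resources that were deleted from git
          selfHeal: true     # revert manual changes made directly against the cluster

Leave prune/selfHeal off, or let people kubectl-edit around it, and the runtime state quietly drifts from what's in the repo.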

> environments where “maintenance downtime” is acceptable and there are only one or two people that actually adjust the configurations

Yes, because before Kubernetes that was how all IT was done? A complete clown show, amirite?