threeseed 15 hours ago

Kubernetes has a proportional learning curve.

If you're used to managing platforms, e.g. networking, load balancers, security, etc., then it's intuitive and easy.

If you're used to everything being managed for you then it will feel steep.

t-writescode 13 hours ago | parent | next

I think this is only true if the original k8s cluster you're operating against was written by an expert and laid out as such.

If you're entering into k8s land with someone else's very complicated mess across hundreds of files, you're going to be in for a bad time.

A big problem, I feel, is that if you don't have an expert design the k8s system from the start, it's just going to be a horrible time; and many people, when asked to set up k8s for their startup or whatever, aren't already experts, so what gets produced isn't maintainable.

And then everyone is cursed.

p_l 4 hours ago | parent | next

Thanks to Kubernetes "flattening" that mess into a somewhat flat object map (I like to call it a blackboard system :D), it can be reasonably easy to figure out the desired state and the current state for a given cluster, even if the files are a mess.
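
For illustration: every object the API server stores carries both halves side by side, a spec (the desired state someone declared) and a status (the current state the controllers observe). A trimmed, hypothetical sketch of what kubectl get deployment web -o yaml might return (the name "web" and the numbers are made up):

    apiVersion: apps/v1
    kind: Deployment
    metadata:
      name: web
    spec:                      # desired state, as declared in the manifests
      replicas: 3
      selector:
        matchLabels:
          app: web
      template:
        metadata:
          labels:
            app: web
        spec:
          containers:
            - name: web
              image: nginx:1.27
    status:                    # current state, as observed by the controllers
      replicas: 3
      updatedReplicas: 3
      availableReplicas: 2

Reading spec against status (plus kubectl describe for the events) is usually enough to orient yourself in an unfamiliar cluster, messy files or not.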

However...

Talking with people who started using Kubernetes later than me[1], it seems like a lot of confusion comes from starting with a somewhat complete example, like a Deployment + Ingress + Service to deploy, well, a typical web application. The stuff that would be trivial to run on a typical PaaS.

The problem is that you then don't know what a lot of those magic incantations mean, the actually very, very simple mechanisms of how things work in k8s get lost, and you can't find your way around a running cluster.
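
For reference, the starter bundle being described is roughly the sketch below (names, image and hostname are made up). Each field is a small, simple concept on its own, but handed over all at once it reads like incantation:

    apiVersion: apps/v1
    kind: Deployment
    metadata:
      name: web
    spec:
      replicas: 2
      selector:
        matchLabels:
          app: web
      template:
        metadata:
          labels:
            app: web
        spec:
          containers:
            - name: web
              image: ghcr.io/example/web:latest
              ports:
                - containerPort: 8080
    ---
    apiVersion: v1
    kind: Service
    metadata:
      name: web
    spec:
      selector:
        app: web
      ports:
        - port: 80
          targetPort: 8080
    ---
    apiVersion: networking.k8s.io/v1
    kind: Ingress
    metadata:
      name: web
    spec:
      rules:
        - host: web.example.com
          http:
            paths:
              - path: /
                pathType: Prefix
                backend:
                  service:
                    name: web
                    port:
                      number: 80

Taken apart (Deployment manages Pods, Service gives them a stable address, Ingress routes HTTP to the Service), the pieces are easy; copied whole, they stay opaque.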

[1] I started learning around 1.0, went to a dev deployment with 1.3, and graduated it to prod with 1.4. Smooth sailing since.[2]

[2] The worst issues since involved dealing with what was actually a global GCP networking outage that we were extra sensitive to due to extensive DNS use in Kubernetes, and once naively assuming that the people before me had set sensible sizes for various nodes, only to find a combination of too-small-to-live EC2 instances choking to the point of control plane death, and an outdated etcd (the rest of the company was too conservative about updating) hitting a rare but possible data-corrupting bug triggered by the flapping those too-small instances caused. I count neither as a k8s issue; either would have killed anything else I could have set up under the same constraints.

threeseed 10 hours ago | parent | prev

The exact same can be said for your Terraform, Pulumi, shell scripts, etc. Not to mention the unique config for every component and piece of infrastructure.

At least Kubernetes is all YAML, consistent and can be tested locally.
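
For example (a sketch, assuming kind, https://kind.sigs.k8s.io, as the local cluster; file and directory names are made up): spin up a throwaway cluster from a small config and validate the same manifests against it before they go anywhere near prod.

    # kind.yaml - throwaway local test cluster
    kind: Cluster
    apiVersion: kind.x-k8s.io/v1alpha4
    nodes:
      - role: control-plane
      - role: worker

    # then, roughly:
    #   kind create cluster --config kind.yaml
    #   kubectl apply --dry-run=server -f ./manifests/
    #   kubectl apply -f ./manifests/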

t-writescode 7 hours ago | parent

My experience with k8s is with Terraform building the k8s environment for me :D

alienchow 14 hours ago | parent | prev | next

That's pretty much it. I think the main issue nowadays is that companies think full stack engineering means the OG stack (FE, BE, DB) + CI/CD + infra + security compliance + SRE.

If a team of 5-10 SWEs has to do all of that while only being graded on feature releases, k8s will massively suck.

I also agree that experienced platform/infra engineers tend to whine less about k8s.

ikiris 13 hours ago | parent | prev

Nah, the difference between managing k8s and the system it was based on is VAST. k8s is much harder than it needs to be because for a long time there wasn't tooling to manage it well. Going from Google internal to k8s is incredibly painful.