t-writescode 13 hours ago
I think this is only true if the original k8s cluster you're operating against was designed and laid out by an expert. If you're entering k8s land with someone else's very complicated mess spread across hundreds of files, you're in for a bad time. A big problem, I feel, is that if you don't have an expert design the k8s system from the start, it's just going to be a horrible time; and many people who are asked to set up k8s for their startup or whatever aren't already experts, so what gets produced isn't maintainable. And then everyone is cursed.
p_l 4 hours ago
Thanks to Kubernetes "flattening" that mess into a somewhat flat object map (I like to call it a blackboard system :D), it can be reasonably easy to figure out the desired state and the current state for a given cluster, even if the files are a mess.

However... talking with people who started using Kubernetes later than me [1], it seems like a lot of the confusion comes from starting with a somewhat complete example, like using a Deployment + Ingress + Services to deploy, well, a typical web application. The stuff that would be trivial to run on a typical PaaS. The problem is that you then don't know what a lot of those magic incantations mean, the actually very, very simple mechanisms of how things work in k8s get lost, and you can't find your way around a running cluster.

[1] I started learning around 1.0, went with a dev deployment on 1.3, graduated it to prod with 1.4. Smooth sailing since [2].

[2] The worst issues since involved dealing with what was actually a global GCP networking outage that we were extra sensitive to due to extensive DNS use in Kubernetes, and once naively assuming that the people before me had set sensible sizes for various nodes, only to find a combination of too-small-to-live EC2 instances choking until control-plane death, and an outdated etcd (because the rest of the company was too conservative about updating) hitting a rare but possible bug that corrupted data, triggered by the flapping caused by the too-small instances. Neither do I count as a k8s issue; they would have killed anything else I could have set up given the same constraints.
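To make the "flat object map" point concrete: every object can be listed and inspected with kubectl, and each one carries its desired state in .spec next to its observed state in .status. A minimal sketch of that workflow (the namespace "my-namespace" and Deployment "web" are made-up names):

    # The "blackboard": list the common objects Kubernetes knows about in a namespace.
    kubectl get all -n my-namespace

    # Any single object shows .spec (desired state) alongside .status (observed state).
    kubectl get deployment web -n my-namespace -o yaml

    # Pull out just the replica counts to compare desired vs. current.
    kubectl get deployment web -n my-namespace \
      -o jsonpath='desired={.spec.replicas} ready={.status.readyReplicas}{"\n"}'

Once you know that every controller is just reconciling .status toward .spec, a messy pile of manifest files matters a lot less, because the cluster itself tells you what it is trying to do.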
threeseed 10 hours ago
The exact same can be said for your Terraform, Pulumi, shell scripts etc. Not to mention the unique config for every component and piece of infrastructure. At least Kubernetes is all YAML, consistent, and can be tested locally.
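For example, a rough local loop might look like this (the ./manifests/ path and "scratch" cluster name are placeholders):

    # Client-side schema check of the manifests, no cluster needed.
    kubectl apply --dry-run=client -f ./manifests/

    # Or bring up a throwaway local cluster with kind and review the diff first.
    kind create cluster --name scratch
    kubectl diff -f ./manifests/
    kubectl apply -f ./manifests/
    kind delete cluster --name scratch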