lakomen | 3 months ago
Thanks, yes. I rent 3 small nodes for the control plane (4 dedicated cores, 8GB RAM, 256GB storage each) and workers at double that size, which should be enough for a test cluster. Well, a test and actual real-life prod cluster, external of course. Costs around 80€/m, about the same as my current server costs.

I noticed that when the master, i.e. the 1st CP node, goes down, the other 2 are not sufficient to keep the cluster running. I wonder why I even have (and am paying for) 3 CP nodes when failover isn't working.

I had the most success with microk8s. I tried many other solutions, and I'm not going down the manual kubeadm path. I can't dedicate all my time to maintaining that cluster; I want to focus on developing my services. Writing a configuration file, or rather, with k8s, a collection of files... Everything screams "don't do it". "It's too complicated, too time intensive, don't do it, you will regret it."

I'm not even sure k8s will be able to recover from the loss of 1 worker node. All my experiments with a 3-worker Postgres cluster showed that if the node count drops below 3, it goes into an endless loop of trying to bring the instances back up. The most important thing is: how good is the solution when disaster happens? K8s tries to reschedule the workloads on other nodes, but if there are fewer nodes than the Helm chart requires, that fails. So in conclusion, I need a fallback node, i.e. 4 worker nodes, and the cluster now costs about 97€/m. But that also means I can't say with certainty which IPs are being used if I scale the service up.

I'm trying to be cost effective; my resources are limited. And I'd like to learn about this, but a course costs over 10k€ and takes 6 months.

I also noticed that when I have 3 CP nodes, only the 1st node has high CPU and RAM usage; the other 2 are pretty much idle except for the ~10% resource usage of etcd, no matter whether they're doing anything or not.

Stateless workloads are a myth.
Or rather, you can't provide a service completely without state. The units can of course be stateless, but they will access stateful containers, like Postgres, which also run on k8s.

I don't think I need k8s for now, and when I do, other solutions are more likely. There's the whole effort of maintaining and upgrading the cluster, and right now there's a transition from Ingress controllers to the Gateway API, so you end up bothering with infra instead of solving actual problems. Thanks for the feedback anyhow.
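On the failover point above: etcd itself should tolerate losing one member out of three, since it only needs a majority (quorum) to stay writable. A quick sketch of the quorum math:

```python
def etcd_fault_tolerance(members: int) -> int:
    """Number of members an etcd cluster can lose while keeping
    a write quorum (a strict majority of the original members)."""
    quorum = members // 2 + 1
    return members - quorum

# 3 control-plane nodes tolerate exactly 1 failure; 5 tolerate 2.
for n in (1, 3, 5):
    print(n, "members ->", etcd_fault_tolerance(n), "failure(s) tolerated")
```

So if the whole cluster dies when the 1st CP node goes down, the likely culprit is not etcd quorum but the API server endpoint: kubeconfigs and worker kubelets are often pointed at the first node's IP rather than a load-balanced or virtual IP in front of all three API servers, so losing that node takes the API with it even though etcd still has a majority. (That's an assumption about this particular setup, but it's a common failure mode.)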