signal11 15 hours ago
> If you answer yes to many of those questions there's really no better alternative than k8s.

This is not even close to true, even with a small number of resources. The notion that k8s is somehow the only choice is right along the lines of "Java Enterprise Edition is the only choice", i.e. a real failure of the imagination.

For startups and teams with limited resources, DO, fly.io and Render are doing lots of interesting work. But what if you can't use them? Is k8s your only choice?

Let's say you're a large org with good engineering leadership, and you have high-revenue systems where downtime isn't okay. Also, for compliance reasons, public cloud isn't okay.

DNS in a tightly controlled large enterprise internal network can be handled with relatively simple microservices. Your org will likely have something already though. (A sketch follows below.)

Dev/Stage/Production: if you can spin up instances on demand this is trivial. Also financial services and other regulated biz have been doing this for eons before k8s.

Load Balancers: lots of non-k8s options exist (software and hardware appliances). (Again, see the sketch below.)

Prometheus/Grafana (and things like Netdata) work very well even without k8s.

Load balancing and ingress is definitely the most interesting piece of the puzzle. Some teams choose nginx or Envoy, but there are also teams that use their own ingress solution (sometimes open-sourced!).

But why would a team do this? Or, more appropriately, why would their management spend on this? Answer: many don't! But for those that do, the drivers are usually cost*, availability and accountability, with engineering capability as a secondary driver.

(*Cost, because it's easy to set up a mixed-ability team of experienced, mid-career and new engineers for this. You don't need a team full of kernel hackers.)

It costs less than you think, it creates real accountability throughout the stack, and most importantly you've now got a team of engineers who can rise to any reasonable challenge and who can be cross-pollinated throughout the org.

In brief, the goal is to have engineers, not "k8s implementers" or "OpenShift implementers" or "Cloud Foundry implementers".
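To make the "relatively simple microservices" claim concrete, here is a minimal sketch of an internal authoritative DNS responder in Go, using the widely used github.com/miekg/dns library. The zone map, names and addresses are invented for illustration; a real service would add zone management, health checks and access control.

```go
package main

import (
	"log"
	"net"

	"github.com/miekg/dns"
)

// Illustrative internal zone data; a real service would load this
// from a database or service registry.
var records = map[string]string{
	"app.internal.example.": "10.0.0.12",
	"db.internal.example.":  "10.0.0.13",
}

// handle answers A queries from the in-memory zone map.
func handle(w dns.ResponseWriter, r *dns.Msg) {
	m := new(dns.Msg)
	m.SetReply(r)
	for _, q := range r.Question {
		if q.Qtype != dns.TypeA {
			continue
		}
		if ip, ok := records[q.Name]; ok {
			m.Answer = append(m.Answer, &dns.A{
				Hdr: dns.RR_Header{Name: q.Name, Rrtype: dns.TypeA, Class: dns.ClassINET, Ttl: 30},
				A:   net.ParseIP(ip),
			})
		}
	}
	w.WriteMsg(m)
}

func main() {
	dns.HandleFunc(".", handle)
	srv := &dns.Server{Addr: ":5353", Net: "udp"}
	log.Fatal(srv.ListenAndServe())
}
```

With this running, `dig @127.0.0.1 -p 5353 app.internal.example A` returns the mapped address, which is the whole job for a tightly controlled internal network.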
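The same goes for the load balancer point: the core of a software load balancer fits in a page of Go standard library code. A hedged sketch of a round-robin HTTP reverse proxy follows; the backend addresses are made up, and production use would obviously add health checking, timeouts and TLS.

```go
package main

import (
	"log"
	"net/http"
	"net/http/httputil"
	"net/url"
	"sync/atomic"
)

func mustParse(raw string) *url.URL {
	u, err := url.Parse(raw)
	if err != nil {
		panic(err)
	}
	return u
}

func main() {
	// Hypothetical backend pool.
	backends := []*url.URL{
		mustParse("http://10.0.0.21:8080"),
		mustParse("http://10.0.0.22:8080"),
	}

	var next uint64
	proxy := &httputil.ReverseProxy{
		// Director rewrites each request to the next backend, round-robin.
		Director: func(req *http.Request) {
			b := backends[atomic.AddUint64(&next, 1)%uint64(len(backends))]
			req.URL.Scheme = b.Scheme
			req.URL.Host = b.Host
		},
	}
	log.Fatal(http.ListenAndServe(":8080", proxy))
}
```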
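And instrumenting a service for Prometheus needs no k8s machinery at all, just an HTTP endpoint for the scraper to hit. A minimal sketch using the official client_golang library; the metric name is illustrative.

```go
package main

import (
	"fmt"
	"log"
	"net/http"

	"github.com/prometheus/client_golang/prometheus"
	"github.com/prometheus/client_golang/prometheus/promhttp"
)

// Illustrative counter; Prometheus scrapes it from /metrics.
var requests = prometheus.NewCounter(prometheus.CounterOpts{
	Name: "myapp_requests_total",
	Help: "Total HTTP requests served.",
})

func main() {
	prometheus.MustRegister(requests)
	http.HandleFunc("/", func(w http.ResponseWriter, r *http.Request) {
		requests.Inc()
		fmt.Fprintln(w, "ok")
	})
	// Standard scrape endpoint; point a Prometheus scrape_config at it.
	http.Handle("/metrics", promhttp.Handler())
	log.Fatal(http.ListenAndServe(":8080", nil))
}
```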
lmm 10 hours ago
> DNS in a tightly controlled large enterprise internal network can be handled with relatively simple microservices. Your org will likely have something already though.

And it will likely be buggy, with all sorts of edge cases.

> Dev/Stage/Production: if you can spin up instances on demand this is trivial. Also financial services and other regulated biz have been doing this for eons before k8s.

In my experience, financial services have notably not been doing it.

> Load Balancers: lots of non-k8s options exist (software and hardware appliances).

The problem isn't running a load balancer with a given configuration at a given point in time. It's how you manage the required changes to load balancers and their configuration as time goes on. It's very common for that to be a pile of Perl scripts that add up to an ad hoc, informally specified, bug-ridden implementation of half of Kubernetes.
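To put a finer point on the "half of Kubernetes" quip: what those scripts usually approximate is a declarative reconcile loop, diffing desired state against live state and converging. A minimal sketch, assuming hypothetical addOrUpdateBackend/removeBackend hooks into whatever load balancer admin API is in use:

```go
package main

import "log"

// Backend describes the desired routing state for one service.
type Backend struct {
	Name string
	Addr string
}

// reconcile applies only the delta between desired and actual state.
// This diff-and-converge loop is the core idea k8s controllers formalize.
func reconcile(desired, actual map[string]Backend) {
	for name, want := range desired {
		if got, ok := actual[name]; !ok || got != want {
			addOrUpdateBackend(want) // create or converge
		}
	}
	for name := range actual {
		if _, ok := desired[name]; !ok {
			removeBackend(name) // garbage-collect what's no longer declared
		}
	}
}

// Hypothetical hooks: in practice these would call the load balancer's
// admin API (nginx, HAProxy, a hardware appliance, etc.).
func addOrUpdateBackend(b Backend) { log.Printf("ensure %s -> %s", b.Name, b.Addr) }
func removeBackend(name string)    { log.Printf("remove %s", name) }

func main() {
	desired := map[string]Backend{"app": {Name: "app", Addr: "10.0.0.21:8080"}}
	actual := map[string]Backend{"old": {Name: "old", Addr: "10.0.0.9:8080"}}
	reconcile(desired, actual)
}
```

Doing this correctly (retries, partial failures, concurrent edits) is exactly the part the ad hoc scripts get wrong.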