zdw 4 days ago

IMO, Kubernetes isn't inevitable, and this seems to paint it as such.

K8s is well suited to dynamically scaling a SaaS product delivered over the web. When you get outside this scenario (for example, on-prem deployments or single-node "clusters" running K8s just for API compatibility), it seems like either overkill or a bad choice. Even when cloud deployed, K8s mostly functions as a batteries-not-included wrapper around the underlying cloud provider services and APIs.

There are also folks who understand the innards of K8s very well that have legitimate criticisms of it - for example, this one from the MetalLB developer: https://blog.dave.tf/post/new-kubernetes/

Before you deploy something, actually understand what the pros/cons are, and what problem it was made to solve, and if your problem isn't at least mostly a match, keep looking.

zbentley 4 days ago | parent | next [-]

> K8s is well suited to dynamically scaling a SaaS product delivered over the web

It’s well suited to other things as well, people are just in denial about some of them.

“I need to run more than two containers and have a googleable way to manage their behavior” is a very common need.
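The "googleable way to manage their behavior" is essentially Kubernetes' declarative Deployment object. As a hedged sketch (the name `web` and the image are placeholders, not from the thread), keeping a couple of containers running might look like:

```yaml
# Hypothetical minimal Deployment: keeps 2 replicas of a container
# running and restarts them on failure. Names and image are placeholders.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: web
spec:
  replicas: 2
  selector:
    matchLabels:
      app: web
  template:
    metadata:
      labels:
        app: web
    spec:
      containers:
        - name: web
          image: nginx:1.27   # placeholder image
          ports:
            - containerPort: 80
```

Applied with `kubectl apply -f deployment.yaml`, the cluster continuously reconciles toward this desired state, which is the "common need" being described.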

capitalhilbilly 2 days ago | parent [-]

This is a need it fails at miserably. k8s reminds me of the RAID recentralization anti-pattern: you protect against a hardware failure that rarely occurs, in exchange for a system where simple higher-level mistakes or security problems can now take down something too large to be allowed to fail.

antonvs 4 days ago | parent | prev | next [-]

Kubernetes, in the form of k3s, was a critical success factor for us with the onprem deployment of our SaaS product.

What's the problem with a single-node cluster? We use that for e.g. dev environments, as well as some small onprem deployments.

> Even when cloud deployed, K8s mostly functions as a batteries-not-included wrapper around the underlying cloud provider services and APIs.

Which batteries are not included? The "wrapper around the underlying cloud provider services and APIs" is enormously important. Why would you prefer to use a less well-designed, more vendor-specific set of APIs?

I seriously don't get these criticisms of k8s. K8s abstracts away, and standardizes, an enormous amount of system complexity. The people who object to it just don't have the requirements where it starts making sense, that's all.

subhobroto 4 days ago | parent [-]

> Kubernetes, in the form of k3s, was a critical success factor for us with the onprem deployment of our SaaS product.

What surprises and gotchas did you have to deal with using k3s as a Kubernetes implementation?

Did you use an LB? Which one? I'm assuming all your onprem nodes were just Linux servers with very basic equipment (the fanciest networking equipment you used was 10GbE PCIe cards, nothing more special than that?).

antonvs 4 days ago | parent [-]

We sell to enterprise customers. All of them deploy our solution on internal cloud-style VM clusters. We use the Traefik ingress controller by default.

There really weren't any particular surprises or gotchas at that level.

In this context, I've never had to deal with anything at the level of the type of Ethernet card. That's kind of the point: platforms like k8s abstract that away.
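For context, routing in a setup like this is typically expressed as an Ingress object that k3s's bundled Traefik controller picks up. A hedged sketch, with the hostname and Service name as illustrative placeholders:

```yaml
# Hypothetical Ingress routing a placeholder hostname to a backend
# Service; the default Traefik ingress controller in k3s would serve this.
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: app-ingress
spec:
  rules:
    - host: app.example.internal   # placeholder hostname
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: app-service   # placeholder Service name
                port:
                  number: 80
```

The manifest itself says nothing about NICs or switches, which illustrates the point about the platform abstracting the hardware away.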

physicsguy 4 days ago | parent | prev [-]

It's also difficult for data pipelines or data-intensive workloads. At several companies we've run into the "need to put an ML model behind an API" problem: pods get killed because health checks via the API are basically incompatible with a container that's fully under load but still working.
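One common mitigation (not necessarily what these companies did) is to decouple liveness from readiness and give the probes generous timeouts, so a saturated-but-working container is merely pulled out of load balancing rather than restarted. A hedged sketch; endpoints, image, and timings are illustrative assumptions:

```yaml
# Hypothetical probe config for a long-running inference container.
# Paths, ports, and timings are assumptions, not from the thread.
containers:
  - name: model-server
    image: registry.example.com/model-server:latest  # placeholder
    readinessProbe:       # failure only removes the pod from Service endpoints
      httpGet:
        path: /ready
        port: 8080
      periodSeconds: 10
      timeoutSeconds: 5
    livenessProbe:        # failure restarts the container, so keep it lenient
      httpGet:
        path: /healthz
        port: 8080
      periodSeconds: 30
      timeoutSeconds: 10
      failureThreshold: 6   # roughly 3 minutes of failures before a restart
```

Even with tuning like this, the underlying tension remains: an HTTP probe cannot easily distinguish "dead" from "busy doing real work", which is the complaint above.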