madjam002 11 hours ago

I don't get these recent anti-Kubernetes posts. Yes, if you're deploying a simple app there are easier alternatives, but as your app starts to get more complex you'll suddenly be wishing you had the Kubernetes API.

I'd use Kubernetes even if I was spinning up a single VM and installing k3s on it. It's a universal deployment target.
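
To make that concrete: the same manifests and kubectl commands work whether the cluster is a managed cloud offering or k3s on one box. A rough sketch (the install one-liner is the documented k3s installer; the app name and image are placeholders):

    # On the VM: install k3s, a single-binary Kubernetes distribution
    curl -sfL https://get.k3s.io | sh -

    # From here it's just Kubernetes: the same commands/manifests
    # you'd run against any other cluster
    k3s kubectl create deployment myapp --image=registry.example.com/myapp:latest
    k3s kubectl expose deployment myapp --port=80 --target-port=8080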

Spinning up a cluster isn't the easiest thing, but I don't understand why so many of the complaints about it come from sysadmin-type people who would replace Kubernetes with hand-provisioned VMs. The main complexity I've found in managing a cluster is the boring sysadmin stuff: PKI, firewall rules, scripts to automate it all. Kubernetes itself is pretty rock solid, and a lot of cluster failure modes still leave your app running.

Etcd can be a pain, but for moderate-sized clusters you can even replace it with a SQL database if that's more your thing, and an HA control plane is easier to manage that way too if you've already got an HA SQL server.
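
For example, k3s supports exactly this through kine: you point the server at an external SQL datastore instead of etcd. A minimal sketch, assuming a Postgres backend (the connection string is a placeholder):

    # Run the k3s control plane against Postgres instead of etcd
    # (DSN below is made up; substitute your own HA database endpoint)
    k3s server \
      --datastore-endpoint="postgres://k3s:secretpw@db.internal:5432/kubernetes"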

Or yes just use a managed Kubernetes cluster.

noname44 10 hours ago | parent

lol, even if you have complex apps there are always easier solutions than Kubernetes. It's evident that you have never run such an app and are just talking about it; otherwise you would know about the breaking changes you hit with every update. Not to mention that you need a high level of expertise and a dedicated team, which costs far more than running an app on Fargate. Recommending a managed Kubernetes cluster is nonsense, as it goes against the whole purpose of Kubernetes itself.

madjam002 8 hours ago | parent | next

I've been running apps on Kubernetes clusters for the past 6 years, and the only breaking change that really comes to mind was the introduction of the IngressClass resource type. Everything else has been incremental. Maybe I'm forgetting something.
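
For reference, that migration was roughly swapping the old annotation for the new spec field once Ingress landed in networking.k8s.io/v1; a minimal before/after sketch with a placeholder class name:

    # Before: ingress class chosen via annotation
    metadata:
      annotations:
        kubernetes.io/ingress.class: nginx

    # After: dedicated IngressClass resource, referenced from the spec
    spec:
      ingressClassName: nginx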

What's wrong with recommending a managed cluster? I wouldn't use one but it is certainly an option for teams that don't want to spin up a cluster from scratch, although it comes with its own set of tradeoffs.

My current project is definitely easier thanks to Kubernetes, as pods are spun up dynamically. I've also migrated to a different cloud provider, and since then to a mix of dedicated servers and autoscaled VMs, all of which was easy because of the common deployment target rather than building on top of a cloud-provider-specific service.
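
For a sense of what "spun up dynamically" means in practice, a sketch (name and image are placeholders): the same one-off pod command works unchanged against the dedicated servers or the autoscaled VMs, because kubectl only cares about the cluster it's pointed at.

    # Launch a one-off worker pod on whatever cluster the current
    # kubeconfig context points at; no cloud-specific API involved
    kubectl run worker-$(date +%s) \
      --image=registry.example.com/worker:latest \
      --restart=Never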

p_l 4 hours ago | parent

There was a breaking change around 1.18, which was spread over a few releases to make migration easier. It followed a similar pattern to graduating beta APIs to stable for things like Ingress; IIRC they covered all the core APIs or so? Don't have time to look it up right now.
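
For Ingress specifically, that graduation was essentially a one-line apiVersion bump in your manifests (if I remember right, the beta versions were finally removed in 1.22):

    # Old manifests, served until their removal:
    apiVersion: extensions/v1beta1        # or networking.k8s.io/v1beta1
    kind: Ingress

    # What they had to become:
    apiVersion: networking.k8s.io/v1
    kind: Ingress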

Generally the only issue was forgetting to update whatever you use to set up the resources, because the apiserver auto-updated the stored formats; in the worst case you could just grab the live objects with kubectl get ... -o yaml/json and trim the read-only fields.
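
Concretely, something like this (the deployment name is a placeholder, and the del() cleanup assumes the Go yq; trimming the fields by hand works just as well):

    # Dump the live object; the apiserver serves it in the current API version
    kubectl get deployment myapp -o yaml > myapp.yaml

    # Strip the server-managed, read-only fields before re-committing it
    yq -i 'del(.status) | del(.metadata.uid) | del(.metadata.resourceVersion)
         | del(.metadata.creationTimestamp) | del(.metadata.managedFields)' myapp.yaml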

mplewis 2 hours ago | parent | prev

This is obvious FUD from a throwaway account. 1.x Kubernetes breakage has rarely affected me. I’m a team of one and k8s has added a lot of value by allowing me to build and run reliable, auto-managed applications that I’m confident I can transfer across clouds.