madjam002 | 8 hours ago
I've been running apps on Kubernetes clusters for the past six years, and the only breaking change that really comes to mind was the introduction of the IngressClass resource type. Everything else has been incremental. Maybe I'm forgetting something.

What's wrong with recommending a managed cluster? I wouldn't use one myself, but it's certainly an option for teams that don't want to spin up a cluster from scratch, although it comes with its own set of tradeoffs.

My current project is definitely easier thanks to Kubernetes: pods are spun up dynamically, and I've migrated first to a different cloud provider and since then to a mix of dedicated servers and autoscaled VMs. All of that was easy because of the common deployment target, rather than building on top of a cloud-provider-specific service.
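For anyone who hit that change, the migration was mostly mechanical. A minimal sketch of the networking.k8s.io/v1 shape, with hypothetical names "my-app" and "nginx"; the older extensions/v1beta1 objects selected a controller via the kubernetes.io/ingress.class annotation instead of spec.ingressClassName:

    apiVersion: networking.k8s.io/v1
    kind: Ingress
    metadata:
      name: my-app               # hypothetical name
    spec:
      ingressClassName: nginx    # replaces the kubernetes.io/ingress.class annotation
      rules:
        - host: my-app.example.com
          http:
            paths:
              - path: /
                pathType: Prefix   # required in networking.k8s.io/v1
                backend:
                  service:
                    name: my-app
                    port:
                      number: 80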
p_l | 4 hours ago
There was a breaking change around 1.18, which was spread over a few releases to make migration easier. It followed the same fix pattern as graduating the beta APIs to stable for things like Ingress; IIRC it covered all the core APIs, or thereabouts? Don't have time to look it up right now.

Generally the only issue was forgetting to update whatever you use to set up the resources, because the apiserver auto-updated the stored formats; in the worst case you could just grab them with kubectl get ... -o yaml/json and trim the read-only fields.
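In case it's useful, a rough sketch of that export-and-trim step, assuming a Deployment named "my-app" in a "prod" namespace (both hypothetical) and yq v4 as one way to do the trimming:

    # Re-export a live object, then strip the read-only fields
    # before committing the manifest back to source control.
    kubectl get deployment my-app -n prod -o yaml > my-app.yaml

    # Fields the apiserver owns, which shouldn't go back into a manifest:
    #   metadata.uid, metadata.resourceVersion, metadata.creationTimestamp,
    #   metadata.generation, metadata.managedFields, status
    yq -i 'del(.status)
         | del(.metadata.uid)
         | del(.metadata.resourceVersion)
         | del(.metadata.creationTimestamp)
         | del(.metadata.generation)
         | del(.metadata.managedFields)' my-app.yaml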