ablob 4 days ago

I just feel like "you can do this with Kubernetes" is a slippery slope. "You can do X with Y, so use Y" is a great way to add a dependency, especially if it is "community vetted" already. Sometimes simple is better - you don't need to add a dependency that implements some of your logic just to stay DRY or whatever you want to call it.

It really feels like we are drowning in self-imposed tech debt and keep adding layers to try to hold it back for just a while longer. That being said, there is no reason not to add Kubernetes once a sufficient overlap is achieved.

cortesoft 4 days ago | parent | next [-]

Kubernetes handles so many layers you are going to need for every app, though… deployments, networking, cert management, monitoring, logging, server maintenance, horizontal scaling… this isn’t a slippery slope, it is just what you need.

chillfox 4 days ago | parent [-]

But k8s does almost none of those things itself!

You still have to pick and configure those components, just as you would have picked and configured apps doing those things without k8s. The only thing k8s actually brings to the table is a common configuration format (yaml).

SOLAR_FIELDS 3 days ago | parent | prev | next [-]

The thing about Kubernetes is that it's a standardization of deployment. Kubernetes is complicated because deploying software is complicated. You might try to YAGNI hand-wave it away, but as the article points out, over time you end up building Kubernetes anyway.

echelon 4 days ago | parent | prev [-]

You can use k8s on $2/mo digital ocean projects. It probably even works on the free tier of a lot of providers.

And there's zero setup. Just a deployment yaml that specifies exactly what you want deployed, which has the benefit of easy version control.

I don't get why people are so bent on hating Kubernetes. The mental cost to deploy a 6-line deployment yaml is less than futzing around with FTP and nginx.
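(For reference, a minimal Deployment manifest looks roughly like this; names and image are placeholders, and in practice it runs a bit longer than six lines:)

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: myapp            # placeholder name
spec:
  replicas: 2
  selector:
    matchLabels: {app: myapp}
  template:
    metadata:
      labels: {app: myapp}
    spec:
      containers:
        - name: myapp
          image: myapp:1.0   # placeholder image
```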

Kube is the new LAMP stack. It's easier too. And portable.

If you're comparing managed kube to a cluster you're self-managing, sure. But that's no different from self-managing your stack in the old world. Suddenly you have to become a sysadmin/SRE.

throwawaypath 4 days ago | parent | next [-]

>And portable.

This made me audibly guffaw. Kubernetes is a lot of things, but "portable" is not one of them. GKE, EKS, AKS, OCP, etc., portability between them is nowhere near guaranteed.

cortesoft 4 days ago | parent [-]

It is if you stick to standard Kubernetes resources, and it has gotten even easier with better storage class and load balancer support. All of the cloud providers now give you default storage classes and ingresses when you provision a cluster, so you can use the exact same deployment on any of them and automatically get those things provisioned correctly out of the box.
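For example, a claim like this (names illustrative) provisions an EBS volume on EKS, a Persistent Disk on GKE, and so on, because omitting storageClassName falls back to whatever default StorageClass the cluster defines:

```yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: data
spec:
  accessModes: ["ReadWriteOnce"]
  resources:
    requests:
      storage: 10Gi
  # no storageClassName: the provider's default StorageClass is used
```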

throwawaypath 4 days ago | parent | next [-]

>It is if you stick to standard Kubernetes resources

"If you stick to standard C..."

No one does, that's the issue. Helm charts that only support certain cloud providers, operators and annotations that end up being platform specific, etc.

>now give you default storage classes and ingresses

Ingress is being deprecated, it's Gateway now! Welcome to hell, er, Kubernetes.

vbezhenar 4 days ago | parent | next [-]

> Ingress is being deprecated

Do you have any links about Ingress being deprecated?

Official docs here: https://kubernetes.io/docs/reference/kubernetes-api/service-...

There are no mentions about this API being deprecated.

0x457 4 days ago | parent | next [-]

The Ingress resource is basically "implementation specific" and isn't portable. It's not deprecated now, but there are plans to retire ingress-nginx: https://kubernetes.io/blog/2025/11/11/ingress-nginx-retireme...

Anyway, the Ingress resource has been in a "migrate to Gateway" state for a while.

jurgenburgen 4 days ago | parent [-]

> but there plans to retire ingress-nginx

To clarify, it's already retired and the repo has been archived since March 24th.

0x457 3 days ago | parent [-]

Mentally, I'm still in January. For some reason the fact that March 24th, 2026 already passed didn't click in my head.

_whiteCaps_ 4 days ago | parent | prev [-]

https://kubernetes.io/blog/2025/11/11/ingress-nginx-retireme...

NGINX Ingress is deprecated, not the Ingress resource itself.

chuckadams 4 days ago | parent | prev | next [-]

Ingress is frozen, not deprecated. Gateway does more, but Ingress isn’t going anywhere. It’s a stable API, which is the opposite of churn.

physicsguy 4 days ago | parent | next [-]

'Til there's a security issue, right? Nginx is a big target.

chuckadams 3 days ago | parent [-]

The Ingress API is not Nginx's API. The spec itself is basically a yaml schema; it's hard to have a vulnerability in that.
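To illustrate: an Ingress is just routing rules; which controller satisfies them (nginx, Traefik, a cloud LB) is a separate choice. A minimal example, with made-up host and service names:

```yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: myapp          # illustrative name
spec:
  rules:
    - host: example.com
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: myapp   # illustrative Service
                port:
                  number: 80
```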

physicsguy 2 days ago | parent [-]

There have been critical vulns in nginx-ingress (the part which is deprecated) like this: https://kubernetes.io/blog/2025/03/24/ingress-nginx-cve-2025...

If you're using it after it's dead, you're at risk of further problems of this nature that aren't in the underlying nginx reverse proxy but in the code wrapping it.

chuckadams 2 days ago | parent [-]

That's one reason I've always used Traefik as my Ingress (I work mostly with K3S, which uses it by default). Which appears to have had its own security issues too, but it still looks like an implementation issue, not a weakness designed in by the spec.

On EKS I'm using whatever AWS has brewed up to integrate ELB/ALB, but I'll tend to trust it ... though maybe I shouldn't, given all the troubles I have with other integrations like secrets management.

SOLAR_FIELDS 3 days ago | parent | prev [-]

Would love to use Gateway! Every time I spin up a new cluster it goes like this:

- New cluster setup, time to use gateway! Yay!

- Oh crap, like 80% of the helm charts and other existing configurations I need for the software I'm trying to deploy STILL don't use Gateway, this new API that's been out for... like half a decade at least.

- Even core networking things like Istio/Envoy only have limited gateway support compared to ingress

- Sigh. Ingress again.

It's been like this since gateway's inception and every time I check the needle has moved like 2% towards gateway. So I'm looking forward to year 2050 when I can use gateway!

The problem, as the CNCF knows, is that if they pushed Gateway and deprecated Ingress, the world would revolt due to the amount of work involved in migrating everything. Therefore, they leave it up to "the people" to do the extra work themselves, who have no incentive to do so, since for many use cases it's not materially better.
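For what it's worth, the Gateway-flavored equivalent of a basic Ingress rule looks like this (names illustrative; it assumes someone has already provisioned a Gateway to reference), and that extra indirection is part of the migration work nobody's lining up to do:

```yaml
apiVersion: gateway.networking.k8s.io/v1
kind: HTTPRoute
metadata:
  name: myapp
spec:
  parentRefs:
    - name: my-gateway        # a Gateway resource provisioned separately
  hostnames: ["example.com"]
  rules:
    - matches:
        - path: {type: PathPrefix, value: /}
      backendRefs:
        - name: myapp          # illustrative Service
          port: 80
```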

cortesoft 4 days ago | parent | prev [-]

I use Kubernetes every day, and have worked with dozens of helm charts, and have yet to encounter cloud specific helm charts. Are these internal helm charts for your company?

Obviously you can lock yourself in if you choose, but I have yet to see third party tools that assume a specific provider (unless you are using tools created BY that provider).

At my previous spot, we were running dozens of clusters, with some on prem and some in the cloud. It was easy to move workloads between the two, the only issue was ACLs, but that was our own choice.

I know they are pushing the new gateway api, but ingresses still work just fine.

throwawaypath 2 days ago | parent [-]

Tell me you haven't managed Kubernetes at scale without telling me you haven't managed Kubernetes at scale.

Helm charts may not support a cloud platform like Rancher, Azure, etc. or may have platform specific issues. First one I checked: https://docs.jfrog.com/installation/docs/helm-chart-requirem...

"When deploying a JFrog application on an AWS EKS cluster, the AWS EBS CSI Driver is required for dynamic volume provisioning. However, this driver is not included in the JFrog Helm Charts."

"JFrog validates compatibility with core Kubernetes distributions. Some Kubernetes vendors apply additional logic or hardening (for example, Rancher), so JFrog Platform deployment on those vendor-specific distributions might not be fully supported."

SOLAR_FIELDS 3 days ago | parent | prev [-]

I'm a Kubernetes user and advocate, but calling it "portable" just tells me you've never actually tried to deploy the same thing on multiple clouds. Even the standardized Kubernetes resources behave differently due to various cloud idiosyncrasies. You can of course make the situation easier, but calling it entirely portable is an overstatement.

gzread 4 days ago | parent | prev | next [-]

> And there's zero setup. Just a deployment yaml that specifies exactly what you want deployed

Writing this yaml is hours and hours of setup if you can't ctrl+c/v it from your last project.

turtlebits 4 days ago | parent [-]

AI is especially good at writing IaC. For most small projects I let it write the Dockerfile too.

subhobroto 4 days ago | parent | prev [-]

> Suddenly you have to become Sysadmin/SRE

I don't think you made that argument, but could a valid conclusion of your comment be that, because Kubernetes is so ubiquitous, using it frees you from being a sysadmin/SRE?

0x457 4 days ago | parent [-]

It frees you from being a sysadmin, but burdens you with being a k8s operator; you're still an SRE.