| ▲ | cortesoft 4 days ago |
| This is obviously slightly exaggerated, but I do feel like this whenever people dismiss Kubernetes as either too complicated or not needed. The response I always got when suggesting Kubernetes is "you can do all those things without Kubernetes." Sure, of course. There are a million different ways to do everything Kubernetes does, and some of them might be simpler or fit your use case better. You can make different decisions for each choice Kubernetes makes, and maybe your decisions are better suited to your workload. However, the big win with Kubernetes is that all of those choices have been made and agreed upon, and now you have an entire ecosystem of tools, expertise, blog posts, AI knowledge, etc., that knows the choices Kubernetes made and can interface with them. This is VERY powerful. |
|
| ▲ | 28304283409234 4 days ago | parent | next [-] |
| Kubernetes is a complicated solution to a complicated problem. A lot of companies have different problems and should look for different solutions. But if you are facing this particular problem, Kubernetes is the way to go. The trick is to understand which problem you are facing. |
| |
| ▲ | analyte123 4 days ago | parent [-] | | Kubernetes can be a sign that you are making things more complicated than they should be, too early. But if you actually have made things complicated enough (whether through essential or accidental complexity) that you have problems that k8s is good at solving, I really hope you have it instead of some hand-rolled solution. I feel the same way about commercial APM tools. Obviously in a perfect world, you would have software so simple and fast that they’re unnecessary. Maybe every month or two someone has to grep some logs that are already in place. Once you’ve gotten yourself into a situation where this is obviously not true, having Datadog, New Relic or similar set up (or using k8s instead of 100 unversioned shell scripts by someone who doesn’t work there anymore) will make your inevitable distributed microservice snafu get resolved in hours rather than a longer business-risking period. | | |
| ▲ | quadruple 3 days ago | parent [-] | | > But if you actually have made things complicated enough [...] The only problem I see in this case is that complexity doesn't come all at once. By the time you reach a problem that k8s is good at solving, you've probably already accidentally made a k8s alongside your piece of software. In my (quite short) SWE career, I've seen software evolve, even software with a proper design stage. Maybe I just don't have enough experience to have seen a properly designed project, but I don't know what I don't know after all. |
|
|
|
| ▲ | zmmmmm 4 days ago | parent | prev | next [-] |
| > all of those choices have been made and agreed upon Have they really? I have a few apps deployed on k8s and I feel like every time I need something, it turns out it doesn't do that and I'm into some exotic extension or plugin type ecosystem. Something as simple as service autoscaling (this was a few years ago) was an adventure into DIY. Moving from google cloud to AWS was a complete writeoff almost - just build it again. I'm sure it captures some layer of abstraction that's useful but my personal experience is it seems very thin and elusive. |
| |
| ▲ | chillfox 4 days ago | parent | next [-] | | Yep, this is my main problem with k8s, it really feels like none of the choices have been made, it's all choosing and configuring components. | | |
| ▲ | hedora 4 days ago | parent [-] | | This, and because of that, claiming your app "runs in kubernetes" is completely meaningless. Concretely: Take your app. With one button click, or apt-get install ??? on all your machines, configure k8s. Now, run your app. The idea that this could work has been laughable for any k8s production environment I've seen, which means you can't do things like write automated tests that inject failures into the etcd control plane, etc. (Yes, I know there are chaos-monkey things, but they can't simulate realistic failures like kernel panics or machine reboots, because that'd impact other tenants of the Kubernetes cluster, which, realistically, is probably single tenant, but I digress..) If your configuration is megabytes of impossible to understand YAML, and is also not portable to other environments, then what's the point? (I understand the point for vendors in the ecosystem: People pay them for things like CNI and CSI, which replace Linux's network + storage primitives with slower, more complicated stuff that has worse fault tolerance semantics. Again, I digress...) | | |
| ▲ | philipallstar 2 days ago | parent [-] | | > If your configuration is megabytes of impossible to understand YAML, and is also not portable to other environments, then what's the point? If almost all your configuration is about getting Kubernetes set up, and not about your application setup inside Kubernetes, there probably isn't a point. But being able to use roughly the same config inside different Kubernetes is quite good. | | |
| ▲ | hedora 2 days ago | parent [-] | | But I've never seen portable kubernetes configs (except for vendor software that probably wouldn't be needed outside of kubernetes). If you just tell kubectl to dump your pod configs, then load them on some other cluster, that definitely won't work. If you use the management software that generated the pod setup somewhere else, that probably won't work either because the somewhere else is going to be missing the CSI and CNI you targeted. Even if those match, it'll be missing the CRDs. God help you if you want to run two programs on one Kubernetes, and there's a CRD versioning conflict in their two dependency sets. |
|
|
| |
| ▲ | esseph 3 days ago | parent | prev | next [-] | | > Moving from google cloud to AWS was a complete writeoff almost - just build it again. Yep. Kubernetes is not just kubernetes when moving between clouds, it becomes a very opinionated product (for better or worse) with lots of vendor addons. Could someone that is familiar with one pick up on the other? Sure! But there are gotchas. And then kubernetes on prem adds the hardware lifecycle piece, and potential data locality issues, etc. | | |
| ▲ | physicles 3 days ago | parent | next [-] | | There are differences across vendors, but there’s a way to build with k8s where the benefit far outweighs the cost. We run a bunch of services in two very different cloud vendors (one of which used to be DIYed with kubeadm), and also on dev machines with k3s. Takes a while to figure this out and to draw the kustomize boundaries in the right place, but once you do, it’s actually really nice. Two things work in our favor: - we’ve been at this for around 8 years, so we didn’t have to deal with all the gotchas at once - we aggressively avoid tech that isn’t universal (so S3 is OK, but SQS or DynamoDB is not; use haproxy instead of ingress controllers; etc) | |
| ▲ | philipallstar 2 days ago | parent | prev [-] | | > Kubernetes is not just kubernetes when moving between clouds, it becomes a very opinionated product (for better or worse) with lots of vendor addons. I think this is gradually getting better. Networking with Gateways is better than with Ingress in this sense. Things like autoscaling groups need to get better, as they are (or were a couple of years ago) very bespoke. |
| |
| ▲ | chaos_emergent 4 days ago | parent | prev | next [-] | | I wouldn’t really call it “DIY” per se; k8s has the resource API and you can create whatever scaling policies you want with it, but I do see how that’s not obvious when it’s advertised as ‘batteries included’ | |
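For the common CPU-bound case, the built-in HorizontalPodAutoscaler is roughly what those resource-based policies look like in practice; a minimal sketch (the Deployment name and thresholds here are made up):

    apiVersion: autoscaling/v2
    kind: HorizontalPodAutoscaler
    metadata:
      name: my-app                    # hypothetical Deployment to scale
    spec:
      scaleTargetRef:
        apiVersion: apps/v1
        kind: Deployment
        name: my-app
      minReplicas: 2
      maxReplicas: 10
      metrics:
        - type: Resource
          resource:
            name: cpu
            target:
              type: Utilization
              averageUtilization: 70  # add pods when average CPU passes 70%

The DIY feeling usually starts once you need custom or external metrics, since that means wiring up a metrics adapter on top.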
| ▲ | 4 days ago | parent | prev [-] | | [deleted] |
|
|
| ▲ | baby_souffle 4 days ago | parent | prev | next [-] |
| > However, the big win with Kubernetes is that all of those choices have been made and agreed upon, and now you have an entire ecosystem of tools, expertise, blog posts, AI knowledge, etc, that knows the choices Kubernetes made and can interface with that. This is VERY powerful. Yep! I am now using k8s even for small / 'single purpose' clusters just so I can keep renovate/argo/flux in the loop. Yes, I _could_ wire renovate up to some variables in a salt state or chef cookbook and merge that to `main` and then have the chef agent / salt minion pick up the new version(s) and roll them out gradually... but I don't need to, now! |
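To make that concrete, the Argo side of the loop is roughly one more manifest pointing at the repo that renovate opens PRs against; a rough sketch (the repo URL, paths, and names are made up):

    apiVersion: argoproj.io/v1alpha1
    kind: Application
    metadata:
      name: my-app
      namespace: argocd
    spec:
      project: default
      source:
        repoURL: https://github.com/example/deploy-configs  # hypothetical git repo renovate bumps versions in
        targetRevision: main
        path: apps/my-app
      destination:
        server: https://kubernetes.default.svc
        namespace: my-app
      syncPolicy:
        automated:
          prune: true      # remove resources that were deleted from git
          selfHeal: true   # revert manual drift back to the git state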
|
| ▲ | throwaway041207 4 days ago | parent | prev | next [-] |
| Agree. For years I had developed my own preferred way of deploying Rails apps large and small on VMs: haproxy, nginx, supervisord, ufw, the actual deploy tooling (capistrano and other alternatives) and so on... and if those tools are old or defunct now it's because my knowledge of that world basically halted 8 years ago because I've never had to configure anything but k8s since then. I've used it every day since then so I have the luxury of knowing it well. So the frustrations that the new or casual user may have are not the same for me. |
|
| ▲ | ablob 4 days ago | parent | prev | next [-] |
| I just feel like "you can do this with Kubernetes" is a slippery slope.
"You can do X with Y, so use Y" is a great way to add a dependency, especially if it is "community vetted" already.
Sometimes simple is better - you don't need to add anything that implements some of your logic as a dependency to stay DRY or whatever you want to call it. It really feels like we are drowning in self-imposed tech debt and keep adding layers to try and hold it for just a while longer.
Now that being said, there is no reason not to add Kubernetes once a sufficient overlap is achieved. |
| |
| ▲ | cortesoft 4 days ago | parent | next [-] | | Kubernetes handles so many layers you are going to need for every app, though… deployments, networking, cert management, monitoring, logging, server maintenance, horizontal scaling… this isn’t a slippery slope, it is just what you need. | | |
| ▲ | chillfox 4 days ago | parent [-] | | But k8s does not do almost any of those things! You have to pick and then configure those components, just like you would have had to pick and configure apps doing those things if you were not using k8s, so the only thing k8s actually brings to the table is a common configuration format (yaml). |
| |
| ▲ | SOLAR_FIELDS 3 days ago | parent | prev | next [-] | | The thing about Kubernetes is it's a standardization of deployment. Kubernetes is complicated because deploying software is complicated. You might try to YAGNI hand-wave it away, but as the article points out, over time, you end up building Kubernetes anyway | |
| ▲ | echelon 4 days ago | parent | prev [-] | | You can use k8s on $2/mo digital ocean projects. It probably even works on the free tier of a lot of providers. And there's zero setup. Just a deployment yaml that specifies exactly what you want deployed, which has the benefit of easy version control. I don't get why people are so bent on hating Kubernetes. The mental cost to deploy a 6-line deployment yaml is less than futzing around with FTP and nginx. Kube is the new LAMP stack. It's easier too. And portable. If you're talking managed kube vs one where you take on the responsibility of self-managing, sure. But that's no different than self-managing your stack in the old world. Suddenly you have to become Sysadmin/SRE. | |
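Something in this shape, give or take (a hypothetical app, trimmed to the essentials):

    apiVersion: apps/v1
    kind: Deployment
    metadata:
      name: hello                   # hypothetical app name
    spec:
      replicas: 1
      selector:
        matchLabels: { app: hello }
      template:
        metadata:
          labels: { app: hello }
        spec:
          containers:
            - name: hello
              image: nginx:1.27     # whatever image you want deployed
              ports:
                - containerPort: 80

Check that in, and your whole deployment history lives in version control.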
| ▲ | throwawaypath 4 days ago | parent | next [-] | | >And portable. This made me audibly guffaw. Kubernetes is a lot of things, but "portable" is not one of them. GKE, EKS, AKS, OCP, etc., portability between them is nowhere near guaranteed. | | |
| ▲ | cortesoft 4 days ago | parent [-] | | It is if you stick to standard Kubernetes resources, and it has gotten even easier with better storage class and load balancer support. All of the cloud providers now give you default storage classes and ingresses when you provision a cluster on them, so you can use the exact same deployment on any of them and automatically get those things provisioned in the right way out of the box. | |
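For example, a claim that leaves storageClassName unset binds to whichever default class the provider ships, so the same manifest works on EKS, GKE, AKS or on-prem (the name is hypothetical):

    apiVersion: v1
    kind: PersistentVolumeClaim
    metadata:
      name: app-data                # hypothetical volume name
    spec:
      accessModes:
        - ReadWriteOnce
      resources:
        requests:
          storage: 10Gi
      # no storageClassName: the cluster's default StorageClass is used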
| ▲ | throwawaypath 4 days ago | parent | next [-] | | >It is if you stick to standard Kubernetes resources "If you stick to standard C..." No one does, that's the issue. Helm charts that only support certain cloud providers, operators and annotations that end up being platform specific, etc. >now give you default storage classes and ingresses Ingress is being deprecated, it's Gateway now! Welcome to hell, er, Kubernetes. | | |
| ▲ | vbezhenar 4 days ago | parent | next [-] | | > Ingress is being deprecated Do you have any links about Ingress being deprecated? Official docs here: https://kubernetes.io/docs/reference/kubernetes-api/service-... There are no mentions about this API being deprecated. | | | |
| ▲ | chuckadams 4 days ago | parent | prev | next [-] | | Ingress is frozen, not deprecated. Gateway does more, but Ingress isn’t going anywhere. It’s a stable API, which is the opposite of churn. | | |
| ▲ | physicsguy 4 days ago | parent | next [-] | | Until there's a security issue, right? Nginx is a big target. | |
| ▲ | chuckadams 3 days ago | parent [-] | | The API of Ingress is not Nginx's API. The spec itself is basically a yaml schema, it's hard to have a vulnerability in that. | | |
| ▲ | physicsguy 2 days ago | parent [-] | | There have been critical vulns in nginx-ingress (the part which is deprecated) like this: https://kubernetes.io/blog/2025/03/24/ingress-nginx-cve-2025... If you're using it after it's dead, you're at risk of further problems of this nature that aren't in the underlying nginx reverse proxy but in the code wrapping it. | |
| ▲ | chuckadams 2 days ago | parent [-] | | That's one reason I've always used Traefik as my Ingress (I work mostly with K3S, which uses it by default). Which appears to have had its own security issues too, but it still looks like an implementation issue, not a weakness designed in by the spec. On EKS I'm using whatever AWS has brewed up to integrate ELB/ALB, but I'll tend to trust it ... though maybe I shouldn't, given all the troubles I have with other integrations like secrets management. |
|
|
| |
| ▲ | SOLAR_FIELDS 3 days ago | parent | prev [-] | | Would love to use Gateway! Every time I spin up a new cluster it goes like this: - New cluster setup, time to use gateway! Yay! - Oh crap, like 80% of the helm charts and other existing configurations I need for the software I'm trying to deploy STILL don't use gateway, this new API that's been out for... like half a decade at least. - Even core networking things like Istio/Envoy only have limited gateway support compared to ingress - Sigh. Ingress again. It's been like this since gateway's inception and every time I check the needle has moved like 2% towards gateway. So I'm looking forward to year 2050 when I can use gateway! The problem, as the CNCF knows, is that if they pushed Gateway and deprecated ingress, the world would revolt due to the amount of work involved in migrating stuff. Therefore, they leave it up to "the people" to do the extra work themselves, who have no incentive to do so since for many usecases it's not materially better. |
| |
| ▲ | cortesoft 4 days ago | parent | prev [-] | | I use Kubernetes every day, and have worked with dozens of helm charts, and have yet to encounter cloud specific helm charts. Are these internal helm charts for your company? Obviously you can lock yourself in if you choose, but I have yet to see third party tools that assume a specific provider (unless you are using tools created BY that provider). At my previous spot, we were running dozens of clusters, with some on prem and some in the cloud. It was easy to move workloads between the two, the only issue was ACLs, but that was our own choice. I know they are pushing the new gateway api, but ingresses still work just fine. | | |
| ▲ | throwawaypath 2 days ago | parent [-] | | Tell me you haven't managed Kubernetes at scale without telling me you haven't managed Kubernetes at scale. Helm charts may not support a cloud platform like Rancher, Azure, etc. or may have platform specific issues. First one I checked: https://docs.jfrog.com/installation/docs/helm-chart-requirem... "When deploying a JFrog application on an AWS EKS cluster, the AWS EBS CSI Driver is required for dynamic volume provisioning. However, this driver is not included in the JFrog Helm Charts." "JFrog validates compatibility with core Kubernetes distributions. Some Kubernetes vendors apply additional logic or hardening (for example, Rancher), so JFrog Platform deployment on those vendor-specific distributions might not be fully supported." |
|
| |
| ▲ | SOLAR_FIELDS 3 days ago | parent | prev [-] | | I'm a Kubernetes user and advocate but to call it "portable" just tells me you've never actually tried to deploy the same thing on multiple different clouds. Even the standardized kubernetes resources behave differently due to various cloud idiosyncrasies. You can of course make the situation easier, but to call it entirely portable is a stretch. |
|
| |
| ▲ | gzread 4 days ago | parent | prev | next [-] | | > And there's zero setup. Just a deployment yaml that specifies exactly what you want deployed Writing this yaml is hours and hours of setup if you can't ctrl+c/v from your last project | | |
| ▲ | turtlebits 4 days ago | parent [-] | | AI is especially good at writing IaC. For most small projects I let it write a Dockerfile too. |
| |
| ▲ | subhobroto 4 days ago | parent | prev [-] | | > Suddenly you have to become Sysadmin/SRE I don't think you made that argument, but could a valid conclusion of your comment be that, because Kubernetes is so ubiquitous, using it frees you from being a Sysadmin/SRE? | |
| ▲ | 0x457 4 days ago | parent [-] | | Frees you from being a sysadmin, but burdens you with being a k8s operator, still an SRE. |
|
|
|
|
| ▲ | Kinrany 4 days ago | parent | prev | next [-] |
| If you can solve the same problem in a simpler way without using k8s, that means k8s is not a zero cost abstraction. It's not obligated to be, but it's also obvious why people would want it to be. |
| |
| ▲ | cortesoft 4 days ago | parent [-] | | > If you can solve the same problem in a simpler way without using k8s I think I disagree with this, or at least the implication. I think it is true you can solve EACH OF THOSE PROBLEMS INDIVIDUALLY in a simpler way than Kubernetes, but the fact that you are going to have to solve at least 5-10 of those problems individually makes the sum total more complicated than Kubernetes, not to mention bespoke. The Kubernetes solutions are all designed to work together, and when they fail to work together, you are more likely to find answers when you search for it because everyone is using the same thing. I think it is fair to say k8s is not a zero cost abstraction, but nothing you use instead is going to be, either, and when you do run into a situation where that abstraction breaks, it will be easier to find a solution for kubernetes than it will for the random 5 solutions you pieced together yourself. |
|
|
| ▲ | ohNoe5 4 days ago | parent | prev | next [-] |
| Ephemeral user accounts were agreed upon before that: the OG container. Docker and k8s are just wrappers around namespaces, cgroups, file system ACLs, some essential cli commands, which can also be configured per user. We may be headed back there. Have seen some experiments leveraging the Linux kernel's BPF and sched_ext to fire off just the right-sized compute schedule in response to sequences of specific BPF events. Future "containers" may just be kernel processes and threads... again. Especially if enough human agency looks away from software as AI makes employment for enough people untenable. Why would those who remain want to manage kernels and k8s complexity? Imo it's less that we agreed on k8s specifically and more that we agreed to let people use all the free money to develop whatever was believed to make the job easier; but if the jobs go away then it's just more work for the few left |
| |
| ▲ | majormajor 4 days ago | parent | next [-] | | > Docker and k8s are just wrappers around namespaces, cgroups, file system ACLs, some essential cli commands, which can also be configured per user. Docker, yes, but kubernetes is way more than that the instant you have more than one physical machine node. (If you only have one node in any deploy, sure, it's likely overkill, but that seems like a weird enough case to not be worth too much ink.) If you silently replaced all my container images with VM images and nodes running containers with nodes running VMs, I think the vast majority of all my Kubernetes setup would be essentially unchanged. Heck, replace it all with people with hands on keyboard in a datacenter running around frantically bringing up new physical servers, slapping hard drives in them, and re-configuring the network, and I don't think the user POV of how to describe it would change that much. | | |
| ▲ | foobarian 4 days ago | parent [-] | | > nodes running VMs, huh, but how would bursting work then? Do VMs support it nowadays? | | |
| ▲ | majormajor 4 days ago | parent [-] | | I've seen some places advertise it but I have not tried it. But, honestly, more generally in my head I wasn't thinking much about it since I consider that more of a "cost optimization" thing than a "core kubernetes function." E.g. the addition (or not) of limits is just a couple lines, compared to all the rest of the stuff that I'd be managing specification of (replicas, environment, resource baseline, scheduling constraints, deployment mode...) that would translate seamlessly. (And there are a lot of parts of kubernetes that annoy me, especially around the hoops it puts up to customize certain things if you reaalllly actually need to, but it would never cross my mind in a hundred years to characterize it as just a wrapper around cgroups etc like the OP.) |
|
| |
| ▲ | xyzzy_plugh 4 days ago | parent | prev [-] | | Something often underappreciated is that, in the possible future you're describing, you can use all of these new fangled "what's old is new again" approaches by continuing to just use Kubernetes. Kubernetes is, in a way, designed to replace itself. | | |
| ▲ | ohNoe5 4 days ago | parent [-] | | Kubernetes is software. It cannot do anything "itself", let alone "replace itself". Don't anthropomorphize software. Inevitably it will be a human replacing it with what they define as the best method |
|
|
|
| ▲ | troupo 4 days ago | parent | prev | next [-] |
| https://x.com/livingdevops/status/2034957580750266632 "Kubernetes is beautiful. Every Concept Has a Story, you just don't know it yet... So you use a Deployment... So you use a Service... So you use Ingress... So..." (Full text at the link) |
|
| ▲ | chillfox 4 days ago | parent | prev | next [-] |
| lol, the big problem with kubernetes is that none of the choices have been made, it's not opinionated at all, there are no conventions. It's all configuration and choices all the way down. There's way too much yaml, and way too many choices for every tiny component, it's just too much. I do run a k3s cluster for home stuff...
But I really wish I could get what it provides in a much simpler solution. My dream solution would effectively do the same as k3s + storage, but with a much simpler config, zero yaml, zero choices for components, very limited configuration options; it should just do the right thing by default.
Storage (both volume and s3), networking, scale to zero, functions, jobs, ingress, etc... should all just be built in. |
| |
| ▲ | turtlebits 4 days ago | parent | next [-] | | You're going to have to write some sort of config. It not being opinionated is a good thing. It lets you deploy just about anything under the sun. | |
| ▲ | chillfox 3 days ago | parent [-] | | Well... we have k8s for that... I do not wish to take k8s away from those who like it, I am asking for a new solution that's very opinionated, and as close to zero config as practical. |
| |
| ▲ | waterproof 4 days ago | parent | prev [-] | | In what cases do you need autoscaling on your home stuff? | | |
| ▲ | chillfox 4 days ago | parent [-] | | I have limited ram and want scale to zero for apps that use a lot of ram but that I only use one at a time, like game servers, or for things that can be done overnight while I sleep, like media encoding. The main reason I went to k8s is not having to think about what machine will have enough resources to run an app; just throw it at the cluster and it figures out where there's capacity.
And, I want hardware failing/getting replaced to be a non-issue. edit: I wanted to add that my hobby is not systems admin, I want it to be as hands off as possible. Self-hosting is a means to an end. I have so far saved over $200/month in subscriptions by replacing subscriptions I was using with self-hosted alternatives. I can now use that money on my actual hobbies. |
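For the record, the scale-to-zero part isn't really something stock k8s/k3s does on its own; one way to get it for scheduled things like the overnight encoding is KEDA's cron trigger. A rough sketch, with the names and schedule made up:

    apiVersion: keda.sh/v1alpha1
    kind: ScaledObject
    metadata:
      name: encoder                 # hypothetical media-encoding Deployment
    spec:
      scaleTargetRef:
        name: encoder
      minReplicaCount: 0            # nothing runs outside the window, freeing the RAM
      maxReplicaCount: 1
      triggers:
        - type: cron
          metadata:
            timezone: Etc/UTC
            start: "0 1 * * *"      # spin up at 01:00
            end: "0 6 * * *"        # back down to zero at 06:00
            desiredReplicas: "1"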
|
|
|
| ▲ | dev_l1x_be 4 days ago | parent | prev | next [-] |
| My take is that k8s the idea is ok-ish; the implementation, on the other hand, not so much. This comes down to the fact that software architecture is not great in general and very few people care about simplicity. |
|
| ▲ | vbezhenar 4 days ago | parent | prev | next [-] |
| Yeah, I spent quite a bit of time learning Kubernetes, but now I'd use it to host a static webpage on a single server, over alternatives. It's so awesome. |
| |
| ▲ | zmmmmm 4 days ago | parent | next [-] | | The question is, how do we outsiders differentiate Stockholm syndrome from something truly being awesome? | |
| ▲ | actionfromafar 4 days ago | parent | prev [-] | | This is truly interesting to me. Why? | | |
| ▲ | cortesoft 4 days ago | parent | next [-] | | I am not the person you asked this question to, but I would probably do the same so I will answer: Once you get used to it, it just makes managing things simple if you always use it for everything. I have a personal harbor service that I run on my local cluster that has all my helm charts and images, and I can run a single script that sets up my one node cluster, then run a helm install that installs cert-manager and my external-dns, and now I can deploy my app with whatever subdomain I want and I immediately get DNS set up and certs automatically provisioned and rotated. It will just work. |
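The per-app part of that is basically one Ingress; roughly what mine look like (the issuer name and domain here are placeholders):

    apiVersion: networking.k8s.io/v1
    kind: Ingress
    metadata:
      name: my-app
      annotations:
        cert-manager.io/cluster-issuer: letsencrypt  # hypothetical ClusterIssuer name
    spec:
      tls:
        - hosts:
            - my-app.example.com
          secretName: my-app-tls                     # cert-manager provisions and rotates this certificate
      rules:
        - host: my-app.example.com                   # external-dns creates the DNS record from this host
          http:
            paths:
              - path: /
                pathType: Prefix
                backend:
                  service:
                    name: my-app
                    port:
                      number: 80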
| ▲ | vbezhenar 4 days ago | parent | prev [-] | | 1. Assuming a managed service, it frees me from host OS management. So basically the same proposition as the good old "PHP+MySQL" hosters: you upload your website, they make sure it works. But without the limitations and with much better independence.
2. It allows me to configure everything using standard manifests. I need to provision the cluster itself initially, then everything can be done with gitops at various levels of automation. I don't need to upload my pages via FTP. My CI will build an OCI image, publish it to some registry, then I'll change the image tag of my deployment and it'll be updated.
3. It allows me to start simple and extend seamlessly in the future. I can add new services. I can add new servers. I can add new replicas of existing services. I can add centralized logging, metrics, alerts. It'll get more complicated, but I can manage the complexity and stop where I feel comfortable.
4. One big thing that's solved even with the simplest Kubernetes deployment is new version deployment with zero downtime. When I update the image tag of my deployment, by default Kubernetes will start a new pod, wait for it to answer readiness checks, then redirect traffic to the new pod, let the old pod gracefully stop and then remove it. With every alternative technology, configuring the same requires quite a bit of friction, which naturally restricts you to deploying new versions only at blessed times. With Kubernetes, I trust it enough that I don't care about deployment time; I can deploy a new version of a heavily loaded service in the middle of the day and nobody notices. (A rough sketch of the relevant manifest bits is at the end of this comment.)
5. There are various "add-ons" to Kubernetes which solve typical issues. For example, an Ingress Controller allows the developer to describe the Ingress of the application. It's a set of declarative HTTP routes which will be visible outside and which will be reverse-proxied to the service inside. The simplest route is https://www.example.com/ -> http://exampleservice:8080, but there's a lot more to it; basically you can think about it as nginx config done differently. Another example is the certificate manager: you install it, you configure it once to work with letsencrypt and you forget about TLS, it just works. Another example is the various database controllers, for example cloudnativepg allows you to declaratively describe postgres. The controller will create the pod for the database, initialize it, create a second pod, configure it as a replica, perform continuous backup to S3, monitor availability and switch master to replica if necessary, and handle database upgrades. A lot of moving parts (which might be scary, tbh), all driven by a simple declarative configuration. Another example is monitoring solutions, which let you install a prometheus instance and configure it to capture all metrics from everything in the cluster, along with some useful charts in grafana, all with very little configuration.
6. There are various "packages" for Kubernetes which essentially package some useful software, usually as helm charts. Think `apt-get`, but for a more complicated set of services, mostly pre-configured and typically useful for web applications. The examples above are all installable with helm, but they add new kubernetes manifest types, which is why I called them "add-ons"; there are also simpler applications.
Just for the record, I don't suggest that to everyone. I spent quite a bit of time tinkering with Kubernetes. It definitely brings a lot of gotchas for a new user and it also requires quite a bit of self-restraint for experienced users to not implement every devops good practice in the world. Sometimes maybe you don't even want to start with ingress; I saw a cluster which used a manually configured nginx reverse proxy instead and it worked for them. You can be very simple with Kubernetes. |
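For point 4, the relevant manifest bits are just the default rolling update strategy plus a readiness probe; a rough sketch (the image, port and health path are made up):

    apiVersion: apps/v1
    kind: Deployment
    metadata:
      name: web                     # hypothetical service
    spec:
      replicas: 3
      strategy:
        type: RollingUpdate
        rollingUpdate:
          maxUnavailable: 0         # keep old pods serving until replacements are Ready
          maxSurge: 1
      selector:
        matchLabels: { app: web }
      template:
        metadata:
          labels: { app: web }
        spec:
          containers:
            - name: web
              image: registry.example.com/web:1.2.3  # CI bumps this tag to deploy
              readinessProbe:
                httpGet:
                  path: /healthz    # hypothetical health endpoint
                  port: 8080
                periodSeconds: 5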
|
|
|
| ▲ | PunchyHamster 4 days ago | parent | prev | next [-] |
| Honestly the main problem is people using k8s for something that's like... a database, and an app, and maybe a second app, that all could be containers or just a systemd service. And then they hit all the things that make sense in a big company with like 40 services but very little in their context, and complain that a complex thing designed for complex interactions isn't simple |
| |
| ▲ | nazcan 4 days ago | parent | next [-] | | But if you want some redundancy, k8s lets you just say run 4 of this, 6 of this on these 3 machines. At least I find it quite straightforward. The database is more complex since there is storage affinity (I use cockroachDB with local persistent volumes for it) - but stateful is always complicated. | |
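The "on these 3 machines" part is one stanza on the Deployment spec; a rough fragment, with hypothetical labels:

    spec:
      replicas: 4
      template:
        spec:
          topologySpreadConstraints:
            - maxSkew: 1
              topologyKey: kubernetes.io/hostname    # spread the pods across nodes
              whenUnsatisfiable: ScheduleAnyway
              labelSelector:
                matchLabels: { app: my-app }         # hypothetical app label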
| ▲ | tarkin2 4 days ago | parent [-] | | Most of the time you don't need redundancy. You need regular backups for exceptional circumstances. And k8s gives you more complexity, and more problems through more moving parts, to give you the possibility of using a feature you'll never need, and if you do start to use it, it'll probably be instead of fixing performance problems downstream | |
| ▲ | cortesoft 4 days ago | parent [-] | | Are we talking about personal projects where there are no expectations, or small startups where you don’t have much scale but you still care about downtime and data loss? Personal projects are one thing, but even the smallest startup wants to be able to avoid data loss and downtime. If you are running everything on one server, how do you do kernel patches? You need to be able to move your workload to another server to reboot for that, even if you don’t want redundancy. Kubernetes does this for you. Bring in another node, drain one (which will start up new instances on the new node and shift traffic before bringing down the other instance, all automatically for you out of the box), and then reboot the old one. Again, you could do all of this with other tech, but it is just standard with Kubernetes. | |
| ▲ | KronisLV 4 days ago | parent | next [-] | | > but even the smallest startup wants to be able to avoid data loss Seems true at a glance! > and downtime. Maybe less so - I think there’s plenty out there, where they’re not chasing nines and care more about building software instead of some HA setup. Probably solve that issue when you have enough customers to actually justify the engineering time. A few minutes of downtime every now and then isn’t the end of the world if it buys you operational simplicity. | |
| ▲ | nazcan 2 days ago | parent | prev [-] | | Agreed. Upgrading just one piece, and ensuring every committed write survives is critical in most commercial applications. |
|
|
| |
| ▲ | jmalicki 4 days ago | parent | prev [-] | | Luckily since I met this guy named Claude most of that complexity has gone away. | | |
| ▲ | andai 4 days ago | parent [-] | | A while back when the agents got hyped I was looking into the whole "give it a VM / docker container" thing, and I realized the safest and simplest option was just to give it its own machine. Then I realized giving it root on a $3 VPS is functionally equivalent. If it blows it up, you just reset the VM. It sounds bad but I can't see an actual difference. |
|
|
|
| ▲ | subhobroto 4 days ago | parent | prev [-] |
| > This is VERY powerful No argument there. The Toyota 5S-FE non-interference engine is a near indestructible 4 cylinder engine that's well documented, popular, and you can purchase parts for pennies. It has powered 10 Camry and Lexus models and is battle proven. You can expect any mechanic who has been a professional mechanic for the last 3 years to know exactly what to do when it starts acting up. 1 out of 4 cars on the road has this engine or a close clone of it. It's not what any reasonable person would use for a weedwhacker, lawnmower, pool pump or an air compressor. |
| |
| ▲ | cortesoft 4 days ago | parent [-] | | Sure, but to extend your metaphor, Kubernetes HAS smaller engine models that you can use in those situations, and still gain all the benefits of being in the same ecosystem. You can use K3s, for example, and get all the benefits without having a giant engine in your weedwhacker. |
|