raffraffraff 11 hours ago
I'm sure you can get some of the handy extras that come with a typical Kubernetes deployment without the Kubernetes, but overall I'll take Kubernetes + cloud. Once you've got the hang of it, it's ok. I have a Terraform project that deploys clusters with external-dns, external-secrets, cert-manager, metrics, a monitoring stack, scalers and FluxCD. From there, pretty much everything else is done via FluxCD (workloads, dashboards, alerts).

And while I detest writing Helm charts (and sometimes using them, as they can get "stuck" in several ways), they do allow you to wrap up a lot of the Kubernetes components into a single package that accepts more-or-less standardized yaml for stuff like resource limits and annotations (eg for granting an AWS role to a service). And FluxCD's .postBuild is extremely handy for defining variables to apply to more generic template yaml, so we avoid sprawl (rough sketches of both at the bottom of this comment).

So much so that I am the one-man-band (Sys|Dev|Sec)Ops for our small company, and that doesn't give me panic attacks. The cloud integration part can be hairy, but I have Terraform patterns that, once worked out, are cookie cutter. With cloud Kubernetes, I can imagine starting from scratch, taking a wrong turn and ending up in hell. But I'm exchanging one problem set for another.

Having spent years managing fleets of physical and virtual servers, I'm happier and more productive now. I never need to worry about building systems or automation for OS builds / patching, config management, application packaging and deployment, secrets management, service discovery, external DNS, load balancing, TLS certs etc. Those are just "words" now, but back then each one was a huge project involving multiple people fighting over "CentOS vs Ubuntu", "Puppet vs Ansible", "RPMs vs Docker containers", "Patching vs swapping AMIs". If you're using Consul and Vault, good luck: you have to integrate all of that into whatever mess you've built, and you'll likely have to write Puppet code and scripts to hook it all together.

I lost a chunk of my life writing 'dockerctl' and a bunch of Puppet code that deployed it so it could manage Docker containers as systemd services. Then I built a Vault integration for that. It worked great across multiple data centers, but took considerable effort. And in the end it's a unique snowflake used by exactly one company, hardly documented and likely full of undiscovered bugs and race conditions even after all the hard work. Onboarding new engineers took considerable time, and it took time away from an existing engineer. And we still had certificates expire in production.
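To illustrate the "standardized yaml" point: most charts expose something like the values below for resource limits and service-account annotations. This is only a sketch, the exact keys depend on the chart, and the role ARN is a made-up placeholder for an IRSA-style role grant:

    # hypothetical values.yaml for a chart like external-dns;
    # key names vary per chart, the ARN is a placeholder
    serviceAccount:
      create: true
      annotations:
        # grants the pod an AWS IAM role via IRSA
        eks.amazonaws.com/role-arn: arn:aws:iam::111111111111:role/external-dns
    resources:
      requests:
        cpu: 50m
        memory: 64Mi
      limits:
        memory: 128Mi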
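And roughly how the .postBuild part works (the variable names and repo name here are invented, but postBuild.substitute / substituteFrom are the real FluxCD Kustomization fields; the generic template yaml then just references ${cluster_name} and friends):

    # Flux Kustomization with post-build variable substitution
    apiVersion: kustomize.toolkit.fluxcd.io/v1
    kind: Kustomization
    metadata:
      name: apps
      namespace: flux-system
    spec:
      interval: 10m
      prune: true
      sourceRef:
        kind: GitRepository
        name: fleet-repo          # placeholder name
      path: ./apps/base
      postBuild:
        substitute:
          cluster_name: prod-eu-west-1
          environment: prod
        substituteFrom:
          - kind: ConfigMap
            name: cluster-vars    # per-cluster values live here

One copy of the generic manifests, one small ConfigMap per cluster, and Flux fills in the blanks at apply time.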