tapoxi 13 hours ago

This read as "old man yells at cloud" to me.

I've managed a few thousand VMs in the past, and I'm extremely grateful for it. An image is built in CI, the service declares what it needs, and the scheduler just handles shit. I'm paged significantly less, and things are predictable and consistent, unlike the world of VMs, where even your best attempt at configuration management results in drift, because the CM system only enforces a subset of everything that can go wrong.

But yes, Kubernetes is configured in YAML, and YAML kind of sucks, but you rarely do that. The thing that changes is your code, and once you've got the boilerplate down CI does the rest.
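For what it's worth, the boilerplate in question is typically a small Deployment manifest along these lines (a hypothetical sketch; the service name, image, replica count, and resource figures are all made up for illustration):

```yaml
# Hypothetical Deployment manifest; names and numbers are illustrative,
# not taken from any real service.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: example-api
spec:
  replicas: 3
  selector:
    matchLabels:
      app: example-api
  template:
    metadata:
      labels:
        app: example-api
    spec:
      containers:
        - name: example-api
          # Image built in CI; day-to-day changes are usually just a new tag.
          image: registry.example.com/example-api:v1.2.3
          resources:
            requests:          # what the service declares it needs
              cpu: 250m
              memory: 256Mi
            limits:
              cpu: "1"
              memory: 512Mi
```

Once something like this is checked in, the scheduler places the pods wherever the declared resources fit, and CI mostly just bumps the image tag.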

tombert 2 hours ago | parent | next [-]

> But yes, Kubernetes is configured in YAML, and YAML kind of sucks, but you rarely do that.

I'm sorry, citation needed on that. I spend a lot of time working with the damn YAML files. It's not a one-off thing for me.

You're not the first person to say this to me. They say "you rarely touch the YAML!!!", but then I look at their last six PRs, and each one had at least a small change to the YAML setup. I don't think you or they are lying; I think people forget how often you actually have to futz with it.

raxxorraxor 6 hours ago | parent | prev | next [-]

I think it is also a difference between developers and IT. Usually the requirements don't call for thousands of VMs unless you run some kind of data center or a company that specializes in software services.

catdog 13 hours ago | parent | prev | next [-]

YAML is fine, esp. compared to the huge collection of often 10x worse config formats you have to deal with in the VM world.

cess11 11 hours ago | parent | prev | next [-]

I'd prefer Ansible if I was running VMs. Did that at a gig, controlled a vCenter cluster and hundreds of machines in it, much nicer experience than Kubernetes-style ops. Easier to do ad hoc troubleshooting and logging, for one.

vrighter 10 hours ago | parent [-]

Until, as happened to us, you're in the middle of an upgrade cycle with a mix of Red Hat 6 and Red Hat 8 servers, and Ansible decides to require the latest available version of Python on Red Hat 8, which isn't available on Red Hat 6, so you have no way of using Ansible to manage both sets of servers.

The python ecosystem is a cancer.

mkesper 8 hours ago | parent | next [-]

Well, you were free to install a version of Python 3 on the CentOS 6 machines; that's what we ended up doing and using for Ansible. Ansible's Python 2.6 support was a bad lie anyway; multiple things were already broken. Ten years of support without acknowledging changes in the ecosystem just doesn't work.

vrighter 5 hours ago | parent [-]

We did, but most Ansible modules still didn't work on the system. They advertise the agentless design and how everything goes over SSH, yet Python has to be installed on all your servers anyway, because Ansible generates Python code and runs it on the remote machine. And some modules require specific Python versions.
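For anyone hitting the same wall: the usual workaround is to tell Ansible which interpreter to use per host group via `ansible_python_interpreter`, so the generated module code runs under a Python it supports. A hedged sketch of a YAML inventory (hostnames and the RHEL 6 interpreter path are made up; `/usr/libexec/platform-python` is the interpreter RHEL 8 ships for system tooling):

```yaml
# Hypothetical inventory; hostnames and the RHEL 6 interpreter
# path are placeholders for illustration.
all:
  children:
    rhel6:
      hosts:
        legacy01.example.com:
      vars:
        # Point Ansible at a manually installed Python, since the
        # system Python on RHEL 6 is too old for modern modules.
        ansible_python_interpreter: /usr/local/bin/python3.6
    rhel8:
      hosts:
        new01.example.com:
      vars:
        ansible_python_interpreter: /usr/libexec/platform-python
```

Though, as noted above, this only helps for modules that actually support whatever interpreter you point it at.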

cess11 10 hours ago | parent | prev [-]

Sure, and I'm also not a fan of Red Hat. We ran Ubuntu and Debian on that gig; the few Python issues we ran into we could fix with some package pinning and the like.
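The pinning itself can be pushed out from Ansible, for the record. A sketch of one way to do it on Debian/Ubuntu, writing an apt preferences file with the stock `copy` module (the pinned package and version are placeholders, not from the source):

```yaml
# Hypothetical play; the pinned package and version are placeholders.
- hosts: all
  become: true
  tasks:
    - name: Pin a problematic package to a known-good version
      ansible.builtin.copy:
        dest: /etc/apt/preferences.d/pin-python3-yaml
        content: |
          Package: python3-yaml
          Pin: version 5.3.1-*
          Pin-Priority: 1001
```

A priority above 1000 makes apt hold the pinned version even if it would otherwise be a downgrade.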

mollusk 11 hours ago | parent | prev [-]

> "old man yells at cloud"

Literally.