PunchyHamster 4 days ago

Honestly the main problem is people using k8s for something that's like... a database, and an app, and maybe a second app, that all could be containers or just a systemd service.

And then they hit all the things that make sense in a big company with like 40 services, but very little in their context, and complain that a complex thing designed for complex interactions isn't simple.

nazcan 4 days ago | parent | next [-]

But if you want some redundancy, k8s lets you just say "run 4 of this, 6 of this on these 3 machines." At least I find it quite straightforward.
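For what it's worth, "run 4 of this" really is about one field in a Deployment manifest. A minimal sketch (all names and the image are made up for illustration):

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: web            # hypothetical app name
spec:
  replicas: 4          # "run 4 of this"
  selector:
    matchLabels:
      app: web
  template:
    metadata:
      labels:
        app: web
    spec:
      containers:
        - name: web
          image: example/web:1.0   # placeholder image
```

Changing `replicas` is the whole story for stateless scaling; the scheduler spreads the pods across the available nodes.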

The database is more complex since there is storage affinity (I use CockroachDB with local persistent volumes for it) - but stateful is always complicated.

tarkin2 4 days ago | parent [-]

Most of the time you don't need redundancy. You need regular backups for exceptional circumstances. k8s gives you more complexity, and more problems through more moving parts, in exchange for the possibility of using a feature you'll never need - and if you do start using it, it'll probably be instead of fixing the performance problems downstream.

cortesoft 4 days ago | parent [-]

Are we talking for personal projects where there are no expectations, or small startups where you don’t have much scale but you still care about down time and data loss?

Personal projects are one thing, but even the smallest startup wants to be able to avoid data loss and downtime. If you are running everything on one server, how do you do kernel patches? You need to be able to move your workload to another server to reboot for that, even if you don’t want redundancy. Kubernetes does this for you. Bring in another node, drain one (which will start up new instances on the new node and shift traffic before bringing down the other instance, all automatically for you out of the box), and then reboot the old one.

Again, you could do all of this with other tech, but it is just standard with Kubernetes.
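The drain workflow described above maps onto a handful of real kubectl commands; a rough sketch (the node name is hypothetical):

```shell
# Stop scheduling new pods onto the node that needs the kernel patch.
kubectl cordon node-old

# Evict its pods; the controllers recreate them on other nodes first.
kubectl drain node-old --ignore-daemonsets --delete-emptydir-data

# ...reboot node-old into the patched kernel...

# Let the node accept pods again.
kubectl uncordon node-old
```

With a PodDisruptionBudget in place, `drain` also refuses to evict pods faster than your availability target allows.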

KronisLV 4 days ago | parent | next [-]

> but even the smallest startup wants to be able to avoid data loss

Seems true at a glance!

> and downtime.

Maybe less so - I think there are plenty of startups out there that aren't chasing nines and care more about building software than about some HA setup. Better to solve that problem once you have enough customers to justify the engineering time. A few minutes of downtime every now and then isn't the end of the world if it buys you operational simplicity.

nazcan 2 days ago | parent | prev [-]

Agreed. Being able to upgrade just one piece at a time, and ensuring every committed write survives, is critical in most commercial applications.

jmalicki 4 days ago | parent | prev [-]

Luckily since I met this guy named Claude most of that complexity has gone away.

andai 4 days ago | parent [-]

A while back, when the agents got hyped, I was looking into the whole "give it a VM / Docker container" question, and I realized the safest and simplest option was just to give it its own machine.

Then I realized that giving it root on a $3 VPS is functionally equivalent. If it blows the machine up, you just reset the VPS.

It sounds bad but I can't see an actual difference.