▲ kg 4 hours ago
> etcd is a strongly consistent, distributed key-value store, and that consistency comes at a cost: it is extraordinarily sensitive to I/O latency. etcd uses a write-ahead log and relies on fsync calls completing within tight time windows. When storage is slow, even intermittently, etcd starts missing its internal heartbeat and election deadlines. Leader elections fail. The cluster loses quorum. Pods that depend on the API server start dying.

This seems REALLY bad for reliability? I guess the idea is that it's better to have things not respond to requests than to lose data, but the outcome described in the article is pretty nasty. It seems like the solution they arrived at was to "fix" this at the filesystem level by making fsync no longer deliver reliability, which seems like a pretty clumsy solution. I'm surprised they didn't find some way to make etcd more tolerant of slow storage. I'd be wary of turning off filesystem-level reliability at the risk of later running postgres or something on the same system and experiencing data loss, when what I wanted was just for kubernetes or whatever to stop falling over.
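For what it's worth, etcd does expose knobs for exactly that: --heartbeat-interval and --election-timeout can be raised so the cluster tolerates slower fsyncs before members start missing deadlines. A minimal sketch using etcd's embedded-server API (go.etcd.io/etcd/server/v3/embed); the flag names are real, but the specific values here are just illustrative, not a recommendation:

    package main

    import (
        "log"

        "go.etcd.io/etcd/server/v3/embed"
    )

    func main() {
        cfg := embed.NewConfig()
        cfg.Dir = "/var/lib/etcd" // data dir whose WAL is the thing being fsync'd

        // Defaults are 100ms heartbeat / 1000ms election timeout.
        // Raising both gives slow storage more headroom before members
        // miss deadlines and trigger elections.
        cfg.TickMs = 300      // --heartbeat-interval
        cfg.ElectionMs = 3000 // --election-timeout (keep roughly 10x the heartbeat)

        e, err := embed.StartEtcd(cfg)
        if err != nil {
            log.Fatal(err)
        }
        defer e.Close()

        <-e.Server.ReadyNotify()
        log.Println("etcd is ready")
        <-e.Err() // block until the server exits with an error
    }

The tradeoff is that a real leader failure now takes ~3 seconds to detect instead of ~1, which for most clusters is a better failure mode than the one described in the article.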
▲ PunchyHamster 34 minutes ago
> This seems REALLY bad for reliability? I guess the idea is that it's better to have things not respond to requests than to lose data, but the outcome described in the article is pretty nasty.

It is. Because it really starts to crap out above 100ms, even a small hiccup in the network-attached storage of the VM it is running on can trigger this. But it's not as simple as that: if you have multiple nodes and one starts to lag, kicking it out is the only way to keep the latency manageable. A better solution would be to keep a cluster-wide disk latency average and only kick a node that is slow and much slower than the other nodes; that would also auto-tune to slow setups, like someone running it on some spare HDDs in a homelab.
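A rough sketch of what that relative check could look like, purely hypothetical (nothing like this exists in etcd today) and with made-up member names: gather each member's recent fsync latency, take the cluster median, and only flag members that are far above it:

    package main

    import (
        "fmt"
        "sort"
        "time"
    )

    // slowMembers flags members whose recent fsync latency is more than
    // `factor` times the cluster median, instead of comparing against a
    // fixed absolute threshold. Hypothetical policy, not an etcd feature.
    func slowMembers(latency map[string]time.Duration, factor float64) []string {
        if len(latency) == 0 {
            return nil
        }
        all := make([]time.Duration, 0, len(latency))
        for _, d := range latency {
            all = append(all, d)
        }
        sort.Slice(all, func(i, j int) bool { return all[i] < all[j] })
        median := all[len(all)/2]

        var slow []string
        for member, d := range latency {
            if float64(d) > factor*float64(median) {
                slow = append(slow, member)
            }
        }
        return slow
    }

    func main() {
        observed := map[string]time.Duration{
            "node-a": 4 * time.Millisecond,
            "node-b": 6 * time.Millisecond,
            "node-c": 180 * time.Millisecond, // the lagging NAS-backed node
        }
        fmt.Println(slowMembers(observed, 3.0)) // -> [node-c]
    }

On a homelab full of spare HDDs everyone's latency is high, so the median is high too and nobody gets flagged; only a node that is much slower than its peers does, which is the auto-tuning property you're describing.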
▲ landl0rd 32 minutes ago
CAP theorem goes brrr. This is CP. ZooKeeper gives you AP. Postgres (k3s/kine translation layer) gives you roughly CA, and CP-ish with synchronous streaming replication.

If you run this on single-tenant boxes that are set up carefully (ideally not multi-tenant vCPUs, low network RTT, fast CPU, low swappiness, `nice` it to high I/O priority, `performance` over `ondemand` governor, XFS), it scales really nicely and you shouldn't run into this. So there are cases where you actually do want this. A lot of k8s setups would be better served by just hitting postgres, sure, and don't need the big fancy toy with lots of sharp edges. It's got a raison d'être though. Also you can just boot slow nodes and run a lot of them.
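One way to check whether a box is "set up carefully" enough before putting etcd on it is to measure fsync latency on the intended WAL directory directly, roughly what fio's fdatasync test does. A crude sketch; the 10ms budget is the commonly cited rule of thumb for etcd's WAL fsync p99, and the path is just a placeholder:

    package main

    import (
        "fmt"
        "os"
        "sort"
        "time"
    )

    func main() {
        dir := "/var/lib/etcd" // placeholder: the directory that will hold the WAL
        f, err := os.CreateTemp(dir, "fsync-probe-*")
        if err != nil {
            panic(err)
        }
        defer os.Remove(f.Name())
        defer f.Close()

        const samples = 200
        buf := make([]byte, 2048) // small append, similar in spirit to a WAL entry
        times := make([]time.Duration, 0, samples)

        for i := 0; i < samples; i++ {
            if _, err := f.Write(buf); err != nil {
                panic(err)
            }
            start := time.Now()
            if err := f.Sync(); err != nil { // the fsync etcd is waiting on
                panic(err)
            }
            times = append(times, time.Since(start))
        }

        sort.Slice(times, func(i, j int) bool { return times[i] < times[j] })
        p99 := times[len(times)*99/100]
        fmt.Printf("fsync p99: %v\n", p99)
        if p99 > 10*time.Millisecond {
            fmt.Println("this disk will likely give etcd a bad time")
        }
    }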
▲ denysvitali 3 hours ago
Yes, wouldn't their fix likely make etcd not consistent anymore, since there's no guarantee that the data was persisted on disk?
▲ api 21 minutes ago
That’s a design issue in etcd.