szundi 4 hours ago
It would have helped if you told us why you don't like this approach.
zsoltkacsandi 4 hours ago
It's right there:

> the source of most of the problems I've seen with infrastructures using Kubernetes came from exactly this kind of approach

But here are some more concrete stories.

Once, while I was on call, I got paged because a Kubernetes node was running out of disk space. The root cause was the logging pipeline. Normally, debugging a "no space left on device" issue in a logging pipeline is fairly straightforward, if the tools are used as intended. This time, they weren't. The entire pipeline was managed by a custom-built logging operator, designed to let teams describe logging pipelines declaratively. The problem? The resource definitions alone were around 20,000 lines of YAML. In the middle of the night, I had to reverse-engineer how the operator translated that declarative configuration into an actual pipeline. It took three days and multiple SREs to fully understand and fix the issue. Without that layer of declarative magic, an issue like this usually takes about an hour to solve.

Another example: external-dns. It's commonly used to manage DNS declaratively in Kubernetes. We had multiple clusters using Route 53 in the same AWS account. Route 53 has a global API request limit per account. When two or more clusters tried to reconcile DNS records at the same time, one would hit the quota. The others would partially fail, drift out of sync, and trigger retries, creating one of the messiest cross-cluster race conditions I've ever dealt with. (A rough sketch of the per-cluster setup is below.)

And I have plenty more stories like these.
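For context, this is roughly what such an external-dns setup looks like in each cluster, every instance pointed at the same hosted zone. The flag names are real external-dns options; the values here are hypothetical:

    # container args for the external-dns Deployment in one cluster
    args:
      - --provider=aws
      - --source=ingress
      - --domain-filter=example.com   # same Route 53 zone, shared by every cluster
      - --registry=txt
      - --txt-owner-id=cluster-a      # unique per cluster, so instances don't fight over records
      - --interval=1m                 # every reconcile loop calls the Route 53 API
      - --policy=upsert-only

With several clusters each running this reconcile loop against one account-wide API quota, the loops eventually line up and the rate-limit errors start.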