| ▲ | silasb 11 hours ago
I'm not trying to take a shot at the OP, but I keep seeing posts labeled "Production-Grade" that still look more like pet systems than cattle. I'm struggling to understand how something like this can be reproduced consistently across environments. How would you package this inside a Git repo? Can it be managed through GitOps? And if we're calling something production-grade, high availability should be a baseline requirement, since it's table stakes for modern production applications.

What I'd really love is a middle ground between k8s and Docker Swarm that gives operators and developers what they need while still providing an escape hatch to k8s when required. k8s is immensely powerful but often feels like overkill for teams that just need simple orchestration, predictable deployments, and basic resiliency. On the other hand, Swarm is easy to use but doesn't offer the extensibility, ecosystem, or long-term viability that many organizations now expect. It feels like there's a missing layer in between: something lightweight enough to operate without a dedicated platform team, but structured enough to support best practices such as declarative config, GitOps workflows, and repeatable environments.

As I write this, I'm realizing that part of the issue is the increasing complexity of our services. Every team wants a clean, Unix-like architecture made up of small components that each do one job really well. Philosophically that sounds great, but in practice it leads to a huge amount of integration work. Each "small tool" comes with its own configuration, lifecycle, upgrade path, and operational concerns. When you stack enough of those together, the end result is a system that is actually more complex than the monoliths we moved away from. A simple deployment quickly becomes a tower of YAML, sidecars, controllers, and operators. So even when we're just trying to run a few services reliably, the cumulative complexity of the ecosystem pushes us toward heavyweight solutions like k8s, even if the problem doesn't truly require it.
| ▲ | exceptione 9 hours ago | parent | next [-]
Maybe this is what you mean: https://docs.podman.io/en/latest/markdown/podman-kube.1.html

Here you go, linked from the first page: https://docs.podman.io/en/latest/markdown/podman-kube-genera...

Podman has an option to play your containers on CRI-O as well, which is a minimal but K8s-compliant runtime.
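For the curious, the round trip those pages describe is short. A rough sketch, assuming a running container named my-app (all names here are made up):

    # Capture a running container as Kubernetes-style YAML
    podman kube generate my-app -f my-app.yaml

    # Recreate the workload from that YAML, here or on another host
    podman kube play my-app.yaml

    # Tear it back down
    podman kube play --down my-app.yaml

The generated YAML is exactly the kind of artifact you'd check into a Git repo.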
| ▲ | yrxuthst 10 hours ago | parent | prev | next [-]
I have not used quadlets in a "real" production environment, but deploying systemd services is very easy to automate with something like Ansible (a minimal sketch below). That said, I don't see this as a replacement for k8s as a platform for generic applications; it's more for deploying a specific set of containers to a fleet of servers with less overhead and complexity.
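A minimal sketch of what that Ansible automation could look like, assuming a rootful quadlet and hypothetical file names:

    # Hypothetical example: ship a quadlet and start the service it generates.
    - name: Install quadlet
      ansible.builtin.copy:
        src: my-app.container
        dest: /etc/containers/systemd/my-app.container
        mode: "0644"

    - name: Reload systemd and start the generated unit
      ansible.builtin.systemd_service:
        name: my-app.service
        state: started
        daemon_reload: true

systemd's quadlet generator turns my-app.container into my-app.service during the daemon-reload, so there is nothing extra to enable by hand.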
| ||||||||||||||||||||||||||||||||||||||||||||||||||||||||
| ▲ | MikeKusold 7 hours ago | parent | prev | next [-]
> I'm struggling to understand how something like this can be reproduced consistently across environments. How would you package this inside a Git repo? Can it be managed through GitOps?

I manage my podman containers the way the article describes, using NixOS. I have a tmpfs root that gets blown away on every reboot, and deploys happen automatically when I push a commit.
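For anyone who hasn't seen it, the NixOS side of this is a few declarative lines. A minimal sketch (the image and port are placeholders, not my actual setup):

    # configuration.nix fragment; NixOS renders this into a systemd service.
    virtualisation.oci-containers = {
      backend = "podman";
      containers.my-app = {
        image = "docker.io/library/nginx:latest";
        ports = [ "8080:80" ];
      };
    };

Because the whole thing lives in the system configuration, it versions in Git like any other code.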
| ||||||||||||||||||||||||||||||||||||||||||||||||||||||||
| ▲ | xomodo 8 hours ago | parent | prev | next [-]
> How would you package this inside a Git repo?

There are many ways to do that. Start with a simple repo and spin up a VM instance from the cloud provider of your choice, then integrate the commands from this article into a cloud-init configuration (sketched below). Hope you get the idea.
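For example, a hedged cloud-init sketch that drops a quadlet onto a fresh VM and starts it on first boot (the image and names are placeholders, not taken from the article):

    #cloud-config
    write_files:
      # Hypothetical quadlet; systemd generates my-app.service from it.
      - path: /etc/containers/systemd/my-app.container
        content: |
          [Container]
          Image=docker.io/library/nginx:latest
          PublishPort=8080:80
    runcmd:
      - systemctl daemon-reload
      - systemctl start my-app.service

Check that file into the repo alongside your Terraform or provider config and every environment boots the same way.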
| ▲ | 9 hours ago | parent | prev | next [-]
[deleted]
| ▲ | xienze 8 hours ago | parent | prev [-]
> I'm struggling to understand how something like this can be reproduced consistently across environments. How would you package this inside a Git repo?

Very easily. At the end of the day, quadlets (which are really just systemd unit definitions) are plain text files. You can use something like cloud-init to define all of these quadlets and enable them in a single YAML file, giving you a completely unattended install. I do something similar to cloud-init using Flatcar Linux.
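To make the "just text files" point concrete, a minimal quadlet sketch (hypothetical names; the image and port are placeholders):

    # Hypothetical /etc/containers/systemd/my-app.container
    [Unit]
    Description=Example containerized service

    [Container]
    Image=docker.io/library/nginx:latest
    PublishPort=8080:80

    [Service]
    Restart=always

    [Install]
    WantedBy=multi-user.target

After a daemon-reload, systemd's quadlet generator produces my-app.service from this file, so the whole deployment versions and diffs like anything else in a Git repo.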