oron 13 hours ago

I just use a single k3s install on a single bare-metal server from Hetzner or OVH. Works like a charm: very clean deployments, much more stable than docker-compose, and a tenth of the cost of AWS or similar.

p_l 4 hours ago

Doing the same: grabbed a reasonably cheap Ryzen (Zen 2) server with 64 GB of ECC RAM and 4x NVMe SSDs (2x 512 GB + 2x 1024 GB).

Runs pretty much this stack:

  "Infrastructure":

  - NixOS with ZFS-on-Linux for as 2 mirrors on the NVMes 
  - k3s (k8s 1.31)
  - openebs-zfs provisioner (2 storage classes, one normal and one optimized for postgres)
  - cnpg (cloud native postgres) operator for handling databases
  - k3s' built-in traefik for ingress
  - tailscale operator for remote access to cluster control plane and traefik dashboard
  - External DNS controler to automate DNS
  - Certmanager to handle LetsEncrypt
  - Grafana cloud stack for monitoring. (metrics, logs, tracing)

  Deployed stuff:
  - Essentially 4 tenants right now
  - 2x Keycloak + Postgres (2 diff. tenants)
  - 2x Headscale instances with Postgres (2 diff. tenants, connected to Keycloak for SSO)
  - 1x Gitea with Postgres and memcached (for 1 tenant)
  - 3x postfix instances providing simple email forwarding to SendGrid (3 diff. tenants)
  - 2x Dashy as a homepage behind SSO for end users (2 tenants)
  - 1x Zitadel with Postgres (1 tenant; going to migrate the Keycloaks to it as a shared service)
  - 1x YouTrack server (1 tenant)
  - 1x Nextcloud with Postgres and Redis (1 tenant)
  - Tailscale-based proxy to bridge Gitea and some machines that have trouble getting through broken networks
Plus a few random things that are musings on future deployments for now.
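
To give a sense of scale per service: with cnpg a database is one small manifest. A minimal sketch (the name, namespace and postgres-tuned storage class are hypothetical stand-ins for whatever your openebs-zfs setup defines):

    kubectl apply -f - <<'EOF'
    apiVersion: postgresql.cnpg.io/v1
    kind: Cluster
    metadata:
      name: keycloak-db       # hypothetical name
      namespace: tenant-a     # hypothetical namespace
    spec:
      instances: 1
      storage:
        size: 10Gi
        storageClass: zfs-postgres  # hypothetical: the postgres-tuned class
    EOF

The operator takes it from there, creating the pods, PVCs and a <name>-rw service for apps to connect to.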

The server is barely loaded and I can easily clone services around (in fact, a lot of the services above are instantiated from jsonnet templates).

Deploying some things was more annoying than doing it by hand from a shell (specifically Nextcloud), but now I have a replicable setup, for example if I decide to move from host to host.

The biggest downtime so far was dealing with poorly documented systemd-boot behaviour that caused the server to revert to an older configuration and not apply newer ones.

usrme 11 hours ago

Do you have a write-up about this that you could share, even if it's someone else's? I'd be curious to try this out.

fernandotakai 8 hours ago

I was actually playing with Hetzner and k3s over the weekend and found https://github.com/vitobotta/hetzner-k3s to be super useful.
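
For anyone else trying it: the tool is driven by a single YAML config (Hetzner API token, node pools, k3s version; the exact schema is in the repo README), and then it's roughly:

    # config file name is arbitrary; the schema is documented in the repo README
    hetzner-k3s create --config cluster_config.yaml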

globular-toast 7 hours ago

I've done this but on EC2. What would you like to know? Installing K3s on a single node is trivial and at that point you have a fully functional K8s cluster and API.
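
For reference, "trivial" here really is one command with the stock installer:

    curl -sfL https://get.k3s.io | sh -
    # kubeconfig is written to /etc/rancher/k3s/k3s.yaml
    sudo k3s kubectl get nodes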

I have an infrastructure layer that I apply to all clusters, which includes things like cert-manager, an ingress controller and associated secrets. This is all cluster-independent stuff. Then there's some cluster-dependent stuff like storage controllers, etc. I use Flux to keep all of this under version control and automatically reconciled.
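
With Flux that's roughly the standard bootstrap against a Git repo; everything under the given path then gets reconciled into the cluster:

    # owner/repository/path are placeholders
    flux bootstrap github \
      --owner=my-org \
      --repository=infra \
      --path=clusters/production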

From there you just deploy your app with standard manifests or however you want to do it (helm, kubectl, flux, whatever).
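
At its simplest that's something like the following (name and image are purely illustrative):

    kubectl apply -f - <<'EOF'
    apiVersion: apps/v1
    kind: Deployment
    metadata:
      name: demo              # illustrative
    spec:
      replicas: 1
      selector:
        matchLabels:
          app: demo
      template:
        metadata:
          labels:
            app: demo
        spec:
          containers:
            - name: demo
              image: nginx:1.27   # stand-in image
              ports:
                - containerPort: 80
    EOF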

It all works wonderfully. The one downside is that all the various controllers eat up a fair amount of CPU cycles and memory, but it's not too bad.
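
It's easy to keep an eye on, since k3s bundles metrics-server by default:

    kubectl top nodes
    kubectl top pods -A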