themgt 12 hours ago:
Overall I like Dagger conceptually, but I wish they'd start focusing more on API stability and documentation (tbf it's not v1.0). v0.19 broke our Dockerfile builds and I don't feel like figuring out the new syntax atm. Having to commit dev time to the upgrade treadmill to keep CI/CD working was not the dream.

Re: the cloud specifically, see these GitHub issues:

https://github.com/dagger/dagger/issues/6486
https://github.com/dagger/dagger/issues/8004

Basically, if you want consistently fast cached builds it's a PITA and/or not possible without the cloud product, depending on how you set things up. We do run it self-hosted though, so YMMV.
pxc 10 hours ago:
One thing I liked about switching from a Docker-based solution like Dagger to Nix is that it relaxed the infrastructure requirements for getting good caching properties.

We used Dagger, and later Nix, mostly to implement various kinds of security scans on our codebases, using a mix of open-source tools and clients for proprietary ones that my employer purchases. We've been using Nix for years now and still haven't set up a binary cache of our own. But we still have mostly-cached builds thanks to the public NixOS binary cache, and we hit that relatively sparingly because we run those jobs on bare metal in self-hosted CI runners. Each scan job typically finishes in less than 15 seconds once the cache is warm, and takes up to 3 minutes when the local cache is cold (e.g., when we have to build a custom dependency).

Some time in the next quarter or two I'll finish our containerization effort for this, so that all the jobs on a runner share a /nix/store and Nix daemon socket bind-mounted from the host. That gives us relatively safe "multi-tenant" runners where all jobs run under different users in rootless Podman containers while still sharing a global cache for all Nix-provided dependencies. We get a bit more isolation and free cleanup for all our jobs, but we can still keep our pipelines running fast.

We only have a few thousand codebases, so a few big CI boxes should be fine, but if we ever want to autoscale down, it should be possible to convert such EC2 boxes into Kubernetes nodes, which would be a fun learning project for me. Maybe we could get wider sharing that way and stand up fewer runner VMs. Somewhere on my backlog is experimenting with Cachix, so we should get per-derivation caching as well, which is finer-grained than Docker's layers.
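To make the shape of that concrete, here's a minimal sketch of what launching one such job could look like, assuming a multi-user Nix install on the host (the Python scripting, the Debian base image, the /path/to/checkout workspace, and the default.nix entry point are all placeholders for illustration, not our actual pipeline):

    import subprocess

    # Sketch only (not our real setup): run one CI job in a rootless Podman
    # container that reuses the host's Nix store and nix-daemon.
    workspace = "/path/to/checkout"   # placeholder: the job's source checkout

    subprocess.run(
        [
            "podman", "run", "--rm",
            # share the host store read-only; builds go through the host daemon
            "-v", "/nix/store:/nix/store:ro",
            "-v", "/nix/var/nix/daemon-socket:/nix/var/nix/daemon-socket",
            # host profiles so the container can find the host's nix binaries
            "-v", "/nix/var/nix/profiles:/nix/var/nix/profiles:ro",
            # point the Nix client inside the container at the host daemon
            "-e", "NIX_REMOTE=daemon",
            "-v", f"{workspace}:/src:ro", "-w", "/src",
            "docker.io/library/debian:stable-slim",   # any minimal base image works
            "/nix/var/nix/profiles/default/bin/nix-build",
            "--no-out-link", "default.nix",           # placeholder build entry point
        ],
        check=True,
    )

The idea being that the store and profiles are mounted read-only and all actual builds and substitutions go through the host daemon, so every job shares the same warm cache without being able to write to the store directly.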