digdugdirk 12 hours ago

Can you explain/link to why you can't really use this without their cloud product? I'm not seeing anything at a glance, and this looks useful for a project of mine, but I don't want to be trapped by limitations that I only find out about after putting in weeks of work.

themgt 12 hours ago | parent | next [-]

Overall I like Dagger conceptually, but I wish they'd start focusing more on API stability and documentation (tbf it's not v1.0). v0.19 broke our Dockerfile builds and I don't feel like figuring out the new syntax atm. Having to commit dev time to the upgrade treadmill to keep CI/CD working was not the dream.

Re: the cloud specifically, see these GitHub issues:

https://github.com/dagger/dagger/issues/6486

https://github.com/dagger/dagger/issues/8004

Basically if you want consistently fast cached builds it's a PITA and/or not possible without the cloud product, depending on how you set things up. We do run it self-hosted though, YMMV.

pxc 10 hours ago | parent [-]

One thing I liked about switching from a Docker-based solution like Dagger to Nix is that it relaxed the infrastructure requirements for getting good caching behavior.

We used Dagger, and later Nix, mostly to implement various kinds of security scans on our codebases, using a mix of open-source tools and clients for proprietary ones that my employer purchases. We've been using Nix for years now and still haven't set up a binary cache of our own, but we still get mostly-cached builds thanks to the public NixOS binary cache, and we hit that relatively sparingly because we run those jobs on bare metal in self-hosted CI runners. Each scan job typically finishes in under 15 seconds once the cache is warm, and takes up to 3 minutes when the local cache is cold (e.g., when we build a custom dependency).

Some time in the next quarter or two I'll finish our containerization effort for this so that all the jobs on a runner will share a /nix/store and Nix daemon socket bind-mounted from the host, so we can have relatively safe "multi-tenant" runners where all jobs run under different users in rootless Podman containers while still sharing a global cache for all Nix-provided dependencies. Then we get a bit more isolation and free cleanup for all our jobs but we can still keep our pipelines running fast.
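For anyone curious what that shared-store setup looks like concretely, here's a rough sketch. The image name, flake attribute, and exact flags are my assumptions, not a tested config; the store and socket paths are the Nix defaults.

```shell
# Sketch only: run a CI job in a rootless Podman container that shares the
# host's /nix/store and Nix daemon socket. "ci-image" and ".#scan" are
# hypothetical names.
podman run --rm \
  --userns=keep-id \
  -v /nix/store:/nix/store:ro \
  -v /nix/var/nix/daemon-socket/socket:/nix/var/nix/daemon-socket/socket \
  -e NIX_REMOTE=daemon \
  ci-image \
  nix build .#scan
```

With NIX_REMOTE=daemon, builds inside the container go through the host's Nix daemon, so every job shares the global store cache while running as an unprivileged user in its own container.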

We only have a few thousand codebases, so a few big CI boxes should be fine, but if we ever want to autoscale down, it should be possible to convert such EC2 boxes into Kubernetes nodes, which would be a fun learning project for me. Maybe we could get wider sharing that way and stand up fewer runner VMs.

Somewhere on my backlog is experimenting with Cachix, which should get us per-derivation caching as well, finer-grained than Docker's layer caching.
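In case it helps anyone evaluating the same thing, the basic Cachix flow is roughly the following. The cache name is made up, and this is a sketch from memory rather than our actual pipeline.

```shell
# Sketch only: "my-org" is a hypothetical Cachix cache name.
cachix use my-org       # registers the cache as a substituter in nix.conf
nix build .#scanner     # substitutes cached derivations where available
# push the build outputs so other runners can substitute them next time
nix build .#scanner --print-out-paths | cachix push my-org
```

Because Nix caches at the derivation level, a change to one dependency only invalidates the derivations that transitively depend on it, rather than every layer above it as with Docker image caching.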

shykes 8 hours ago | parent | prev [-]

Hi, I'm the founder of Dagger. It's not true that you can't use Dagger without our cloud offering. At the moment, our only commercial product is observability for your Dagger pipelines; it's based on standard OpenTelemetry data emitted by our open-source engine, and it's completely optional.

If you have questions about Dagger, I encourage you to join our Discord server, we will be happy to answer them!