cyberax 12 hours ago

I'm struggling with the caching right now. I'm trying to switch from GitHub Actions to just running stuff in containers, and it works. Except for caching.

Buildkit from Docker is just a pure bullshit design. Instead of the elegant layer-based system, there are now two daemons that fling TAR files around. And for no real reason that I can discern. But the worst thing is that the caching is just plain broken.

bmitch3020 6 hours ago | parent | next [-]

Buildkit can be very efficient at caching, but you need to design your image build around it. Once any step encounters a cache miss, all remaining steps will too.

I'd also avoid loading the result back into the docker daemon unless you really need it there. Buildkit can output directly to a registry or an OCI layout, both of which maintain the image digest and support multi-platform images (admittedly, those problems go away with the containerd storage changes happening, but it's still an additional export/import that can be skipped).
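
For example, something along these lines (the registry name is just a placeholder):

    # Build and push straight to a registry, skipping the daemon import:
    docker buildx build -t registry.example.com/app:latest --push .

    # Or write an OCI layout tarball to disk instead:
    docker buildx build --output type=oci,dest=app-oci.tar .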

All that said, I think caching is often the wrong goal. Personally, I want reproducible builds, and those should bypass any cache to verify that each step always has the same output. Also, when saving the cache, every build caches every step, even steps that aren't used in future builds. For my own projects, the net result of adding a cache could be slower builds.

Instead of caching the image build steps, I think where we should be spending a lot more effort is in creating local proxies of upstream dependencies, removing the network overhead of pulling dependencies on every build. Compute-intensive build steps would still be slow, but a significant number of image builds could be sped up with a proxy at the CI-server level, without tuning builds individually.
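
To sketch what I mean, even a plain pull-through Docker Hub cache on the CI host helps (the port and setup here are arbitrary):

    # A pull-through cache for Docker Hub, run once on the CI server:
    docker run -d --name hub-mirror -p 5000:5000 \
      -e REGISTRY_PROXY_REMOTEURL=https://registry-1.docker.io \
      registry:2
    # Builders then use it via "registry-mirrors" in /etc/docker/daemon.json.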

cyberax 5 hours ago | parent [-]

> Buildkit can be very efficient at caching, but you need to design your image build around it.

Well, that's what I've been trying to do. And failing, because it simply doesn't work.

> I'd also avoid loading the result back into the docker daemon unless you really need it there.

I need Docker to provide me a reproducible environment to run lints, inspections, UI tests and so on. These images are quite massive. And because caching in Docker is broken, they were getting rebuilt every time we did a push.

Well. I switched to Podman and podman-compose. Now they do get cached, and the build time is within ~1 min with the help of the GHA cache.

And yes, my deployment builds are produced without any caching.

boronine 7 hours ago | parent | prev | next [-]

I went down this rabbit hole before, you have to ignore all the recommended approaches. The real solution is to have a build server with a global Docker install and a script to prune cache when the disk usage goes above a certain percentage. Cache is local and instant. Pushing and pulling cache images is an insane solution.
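
The prune script can be as dumb as this (the threshold and mount point are whatever fits your box):

    # Prune the build cache once the Docker partition is over 80% full.
    usage=$(df --output=pcent /var/lib/docker | tail -n 1 | tr -dc '0-9')
    if [ "$usage" -gt 80 ]; then
      docker builder prune --all --force
    fi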

klysm 12 hours ago | parent | prev | next [-]

The layers are tar files; I'm confused about what behavior you actually want that isn't supported.

cyberax 12 hours ago | parent [-]

The original Docker (and the current Podman) created each layer as an overlay filesystem. So each layer was essentially an ephemeral container. If a build failed, you could actually just run the last successful layer with a shell and see what's wrong.

More importantly, the layers were represented as directories on the host system. So when you wanted to run something in the final container, Docker just needed to reassemble them into an overlay mount.
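
Conceptually, each step was just an overlayfs stack over the previous layers' directories, something like (paths invented):

    # Roughly what the classic builder assembled per step (paths invented):
    mount -t overlay overlay \
      -o lowerdir=/var/lib/docker/l2:/var/lib/docker/l1,upperdir=/var/lib/docker/l3,workdir=/var/lib/docker/work \
      /mnt/merged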

Buildkit has broken all of that. Now building is done, essentially, in a separate system; the "docker buildx" command talks to it over a socket. It transmits the context and gets the result back as an OCI image that it then needs to unpack.

This is an entirely useless step, and it also breaks caching all the time. If you build two images that differ only slightly, the host still gets two full OCI artifacts, even if the two images share most of their layers.

It looks like their Bazel infrastructure optimized it by moving caching down to the file level.

cpuguy83 9 hours ago | parent [-]

Buildkit didn't break anything here except that each individual build step is no longer exposed as a runnable image in docker. That was unfortunate, but you can actually have buildkit run a command in that filesystem these days, and buildx now even exposes a DAP interface.
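
E.g., something along these lines; it's experimental and the exact syntax has moved around between buildx versions, so check your version's docs:

    # Experimental: drop into a shell in the build's filesystem.
    BUILDX_EXPERIMENTAL=1 docker buildx debug --invoke /bin/sh build .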

Buildkit is far more efficient than the old model.

cyberax 8 hours ago | parent [-]

Buildkit is still a separate system, unlike the old builder. So you get that extra step of importing the result back.

And since it's a separate system, there are also these strange limitations. For example, I can't cache pre-built images in an NFS directory and then just push them into the Buildkit context. There's simply no command for it; Buildkit can only pull them from a registry.

> Buldkit is far more efficient than the old model.

I've yet to see it work faster than podman+buildah. And it's also just plain buggy. Caching for multi-stage and/or parallel builds has been broken since the beginning. The Docker team just ignores it and closes the bugs: https://github.com/moby/buildkit/issues/1981 https://github.com/moby/buildkit/issues/2274 https://github.com/moby/buildkit/issues/2279

I understand why. I tried to debug it, and simply getting it running under a debugger is an adventure.

So far, I found that switching to podman+podman-compose is a better solution. At least my brain is good enough to understand them completely, and contribute fixes if needed.

paulddraper 9 hours ago | parent | prev [-]

Huh?

Each layer is a tarball.

So build your tarballs (concurrently!), and then add some metadata to make an image.
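
At the bottom it's no more than this (a sketch; the reproducibility flags are the point):

    # A layer is literally a tarball of the files you want in the image.
    # Pinning entry order, mtimes, and ownership makes its digest reproducible.
    tar --sort=name --mtime='2000-01-01 00:00Z' --owner=0 --group=0 \
      -cf layer.tar -C rootfs .
    # Its sha256 is what goes into the image config and manifest.
    sha256sum layer.tar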

From your comment elsewhere, it seems you are expecting the docker build paradigm of running a container and snapshotting it at various stages.

That is messy and has a number of limitations, not the least of which is cross-compilation. Reproducibility is another. But in any case, that's definitely not what these rules are trying to do.

cyberax 8 hours ago | parent [-]

I don't quite understand how it handles running binaries then. For example, I want to do `bash -c "ls -la /"`. How would it run this command? It needs to assemble the filesystem at this point in the build process.

I guess the answer for Bazel is "don't do this"? Docker handles cross-compilation by using emulators, btw.

paulddraper 6 hours ago | parent [-]

> “don’t do this”

Yes. The Bazel way is to produce binaries, files, and directories, and then create an image “directly” from these.

Much as you would create a JAR or ZIP or DEB.

This is (1) fast, (2) small, and (3) more importantly, reproducible. Bazel users want their builds to produce artifacts that are exactly the same, for a number of reasons. Size is also nice…do you really need ls and dozens of other executables in your containerized service?

Most Docker users don’t care about reproducibility. They’ll apt-get install and get one version today and another version tomorrow.

Good? Bad? That’s a value judgement. But Bazel users have fundamentally different objectives.

> emulators

Yeah, emulators are the Docker solution for producing images of different architectures.

Since Bazel doesn't run commands in a running container, it skips that consideration entirely.
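
For contrast, the Docker route is roughly: register QEMU binfmt handlers once on the host, then build under emulation:

    # One-time on the host: register QEMU handlers for foreign binaries.
    docker run --privileged --rm tonistiigi/binfmt --install arm64
    # Then RUN steps execute under emulation for the target platform.
    docker buildx build --platform linux/arm64 -t app:arm64 .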

cyberax 4 hours ago | parent [-]

> Size is also nice…do you really need ls and dozens of other executables in your containerized service?

Yeah, I do. For debugging mostly :(

> Most Docker users don’t care about reproducibility. They’ll apt-get install and get one version today and another version tomorrow.

Ubuntu has daily snapshots. Not great, but works reasonably well. I tried going down the Nix route, but my team (well, and also myself) struggled with it.
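
For the snapshots, I mean something like this inside the image build (the timestamp is an example, and I'm going from memory on the URL format):

    # Pin apt at a dated snapshot so rebuilds see the same package set.
    sed -i 's|http://archive.ubuntu.com/ubuntu|https://snapshot.ubuntu.com/ubuntu/20240301T120000Z|g' \
      /etc/apt/sources.list
    apt-get update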

I'd love to have fully bit-for-bit reproducible builds, but it's too complicated with the current tooling. Especially for something like mobile iOS apps (blergh).