forrestthewoods 12 hours ago
> The article is about producing container images for deployment

Fair. Docker does trigger my predator drive.

I'm pretty shocked that the Bazel workflow involves downloading Docker base images from external URLs. That seems very un-Bazel-like! That belongs in the monorepo for sure.

> What specifically is very very wrong with Linux packaging and dependency resolution?

Linux userspace is, for the most part, built on a pool of global shared libraries and package managers. The theory is that this is good because you can upgrade libfoo.so just once for every program on the system. In practice it turns into pure dependency hell. The universal workaround is to use Docker, which completely nullifies the entire theoretical benefit. Linux toolchains and build systems are particularly egregious at simply assuming a bunch of crap is magically available in the global search path.

Docker is roughly correct in that computer programs should include their gosh darn dependencies. But it introduces so many layers of complexity, each of which gets solved by adding yet another layer. Why do I need estargz??

If you're going to deploy with Docker then you might as well just statically link everything. You can't always get down to a single exe, but you can typically get pretty close!
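In Bazel terms that's roughly one attribute on the binary target (a minimal sketch; the target name, source file, and the musl-toolchain assumption are mine, not from the thread):

    # BUILD.bazel -- hedged sketch of a fully static binary.
    # "server" and main.cc are hypothetical names.
    cc_binary(
        name = "server",
        srcs = ["main.cc"],
        # Bazel's built-in C++ toolchain feature that statically links
        # everything, libc included. Clean with a musl toolchain; glibc's
        # NSS/dlopen corners can still resist fully static linking.
        features = ["fully_static_link"],
    )

Deploying that is then a single file copied into an empty base image, with no shared-library resolution left for the container to paper over.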
dilyevsky 12 hours ago | parent
> I'm pretty shocked that the Bazel workflow involves downloading Docker base images from external URLs. That seems very un-Bazel-like! That belongs in the monorepo for sure.

Not every dependency in Bazel requires you to "first invent the universe" locally. There are lots of examples of this: toolchains, the git_repository and http_archive rules, and so on. As long as they are checksummed (as they are in this case) so that you can still output a reproducible artifact, I don't see the problem.
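For instance, a checksum-pinned http_archive looks like this (a minimal sketch; the repository name, URL, and sha256 value are placeholders, not from the thread):

    # WORKSPACE -- hedged sketch of a checksum-pinned external dependency.
    # "libfoo", the URL, and the sha256 are illustrative placeholders.
    load("@bazel_tools//tools/build_defs/repo:http.bzl", "http_archive")

    http_archive(
        name = "libfoo",
        urls = ["https://example.com/libfoo-1.2.3.tar.gz"],
        sha256 = "0000000000000000000000000000000000000000000000000000000000000000",
        # Bazel rejects the download if the fetched bytes do not hash to
        # this value, so the external URL cannot silently change the build.
    )

Container rules such as rules_oci apply the same idea to base images, pinning them by sha256 digest rather than by a mutable tag.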