forrestthewoods 13 hours ago

Uhhh what? Isn’t the whole point of Bazel that it’s a monorepo with all dependencies so you don’t need effing docker just to build or run a bloody computer program?

It drives me absolutely batshit insane that modern systems are incapable of either building or running computer programs without Docker. Everyone should be profoundly embarrassed and ashamed by this.

I’m a charlatan VR and game dev who primarily uses Windows. But my deeply unpopular opinion is that Windows is a significantly better dev environment and runtime environment because it doesn’t require all this Docker garbage. I swear that building and running programs does not actually have to be that complicated!! Linux userspace got pretty much everything related to dependencies and packages very very very wrong.

I am greatly pleased and amused that the most reliable API for gaming on Linux is Win32 via Proton. That should be a clear signal that Linux userspace has gone off the rails.

jakewins 12 hours ago | parent [-]

You’re covering a lot of ground here! The article is about producing container images for deployment, and has no relation to Bazel building stuff for you; if you’re not deploying as containers, you don’t need this?

On Linux vs Win32 flame warring: can you be more specific? What specifically is very very wrong with Linux packaging and dependency resolution?

forrestthewoods 12 hours ago | parent [-]

> The article is about producing container images for deployment

Fair. Docker does trigger my predator drive.

I’m pretty shocked that the Bazel workflow involves downloading Docker base images from external URLs. That seems very un-Bazel-like! That belongs in the monorepo for sure.

> What specifically is very very wrong with Linux packaging and dependency resolution?

Linux userspace for the most part is built on a pool of global shared libraries and package managers. The theory is that this is good because you can upgrade libfoo.so just once for all programs on the system.

In practice this turns into pure dependency hell. The de facto workaround is to use Docker, which completely nullifies the entire theoretical benefit.

Linux toolchains and build systems are particularly egregious at just assuming a bunch of crap is magically available in the global search path.

Docker is roughly correct in that computer programs should include their gosh darn dependencies. But it introduces so many layers of complexity, each of which gets solved by adding yet another layer. Why do I need estargz??

If you’re going to deploy with Docker then you might as well just statically link everything. You can’t always get down to a single exe. But you can typically get pretty close!
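For what it’s worth, Bazel itself can get you most of the way there. A rough sketch (the target and source names are made up, and fully static linking against glibc has its own NSS/DNS caveats):

    # BUILD (sketch): link a C++ binary as statically as the toolchain allows
    cc_binary(
        name = "server",          # hypothetical target name
        srcs = ["server.cc"],     # hypothetical source file
        linkstatic = True,        # prefer static archives over .so for workspace deps
        linkopts = ["-static"],   # ask the linker for a fully static executable
    )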

dilyevsky 12 hours ago | parent [-]

> I’m pretty shocked that the Bazel workflow involves downloading Docker base images from external URLs. That seems very unbazel like! That belongs in the monorepo for sure.

Not every dependency in Bazel requires you to "first invent the universe" locally. There are lots of examples of this, like toolchains, git_repository, http_archive rules, and so on. As long as they are checksummed (as they are in this case) so that you can still produce a reproducible artifact, I don't see the problem.
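For anyone unfamiliar, the pinning looks roughly like this (the names, URL, and checksums below are placeholders; the image pull assumes rules_oci):

    # WORKSPACE-style sketch; names, URL, and digests are placeholders
    load("@bazel_tools//tools/build_defs/repo:http.bzl", "http_archive")

    http_archive(
        name = "some_lib",
        urls = ["https://example.com/some_lib-1.2.3.tar.gz"],
        # placeholder checksum; Bazel fails the fetch if the bytes don't match
        sha256 = "0000000000000000000000000000000000000000000000000000000000000000",
    )

    # Base image pinned by digest rather than a floating tag (assumes rules_oci)
    load("@rules_oci//oci:pull.bzl", "oci_pull")

    oci_pull(
        name = "distroless_base",
        image = "gcr.io/distroless/base",
        # placeholder digest
        digest = "sha256:0000000000000000000000000000000000000000000000000000000000000000",
    )

Either way the fetched bytes are content-addressed, so the output stays reproducible even though the source is a URL.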

carolosf 10 hours ago | parent | next [-]

Also, it is possible to air-gap Bazel and provide the files offline, as long as they have the same checksums.
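Roughly like this (the paths here are made up):

    # .bazelrc sketch; paths are made up
    # files in --distdir are matched against the declared checksums instead of being fetched
    build --distdir=/opt/offline-archives
    # reuse previously downloaded external repos across builds
    build --repository_cache=/opt/bazel/repo-cache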

forrestthewoods 12 hours ago | parent | prev [-]

Everything belongs in version control imho. You should be able to clone the repo, yank the network cable, and build.

I suppose a URL with a checksum is kinda sorta equivalent. But the article adds a bunch of new layers and complexity to avoid “downloading Cuda for the 4th time this week”. A whole lot of problems don’t exist if the binary blobs live directly in the monorepo and local blob store.

It’s hard to describe the magic of a version control system that actually controls the version of all your dependencies.

Webdev is notorious for old projects being hard to compile. It should be trivial to build and run a 10+ year old project.

dilyevsky 11 hours ago | parent [-]

Making heavy use of remote caches and remote execution was one of the original design goals of Blaze (Google's internal version), iirc, in an effort to reduce build time first and foremost. So kind of the opposite of what you're suggesting. That said, fully air-gapped builds can still be achieved if you just host all those cache blobs locally.
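e.g. something like this in .bazelrc (the cache endpoint below is made up):

    build --remote_cache=grpcs://bazel-cache.internal:443   # self-hosted cache, no third-party URLs at build time
    build --disk_cache=/var/cache/bazel-disk                 # plus a local on-disk cache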

forrestthewoods 11 hours ago | parent [-]

> So kind of the opposite of what you're suggesting.

I don’t think they’re opposites. It seems orthogonal to me.

If you have a bunch of remote execution workers then ideally they sit idle on a full (shallow) clone of the repo. There should be no reason to reset between jobs. And definitely no reason to constantly refetch content.