m132 12 hours ago

To be honest, I've read your response 3 times and I still don't see where we disagree, assuming that we do.

I've mostly focused on the worst Docker horrors I've seen in production and extrapolated them to the future of containers, since pulling in new "containerized" dependencies will inevitably become just as effortless as pulling in regular dependencies already is in the newer high-level programming languages. You've primarily described a relatively fresh or well-managed Docker deployment, while admitting that spaghetti-style dependencies have become the norm and that new layers will keep piling up (and, by extension, become hard to manage).

I think our points of view don't actually collide.

nine_k 10 hours ago

We do not disagree about the essence, but rather about emphasis. Some might say that sloppy engineers were happy to pack their Rube-Goldbergesque deployments into containers. I say that even the most excellent and diligent engineers sometimes faced situations where two pieces of software required incompatible versions of a shared library, which in turn depended on a tree of other libraries with incompatible versions, and so on; there's a practical limit to what you can and should do with bash scripts and abuse of LD_PRELOAD.
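
Schematically, the kind of hack I mean (the paths and the library name are invented for illustration):

  # app_a was built against libfoo.so.1; the system ships libfoo.so.2.
  # Keep a private copy of the old library and force the loader to
  # resolve symbols from it first:
  LD_PRELOAD=/opt/app_a/lib/libfoo.so.1 /opt/app_a/bin/app_a

  # app_b is fine with the system-wide libfoo.so.2:
  /usr/bin/app_b

This works until the pinned private copies start forming their own incompatible dependency graph, which is exactly where the practical limit shows up.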

Many of the "new" languages, like Go (16 years old), Rust (13 years), or Zig (9 years), can just build static binaries that don't even depend on libc. This has both upsides and downsides, especially around security fixes: rebuilding a container to pick up an updated .so dependency is often easier and faster than rebuilding a Rust project.
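
Schematically, for a Go service (standard toolchain flags; the image name is made up):

  # A fully static Go binary, not even depending on libc:
  CGO_ENABLED=0 go build -o server .
  file server        # reports "statically linked"

  # Contrast with the dynamic world: --pull grabs the patched base
  # image, and the rebuild picks up the updated .so for the app:
  docker build --pull -t example/app:latest .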

Docker (or preferably Podman) is not a replacement for linkers. It's an augmentation of the package system, and a replacement for the shared file system layout, which is inadequate for the modern multi-purpose use of a Unix (well, Linux) box.
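
Concretely (the Debian images are real, the service path is hypothetical):

  # Two mutually incompatible userlands, each with its own /usr and
  # its own package set, side by side on one host, sharing nothing:
  podman run -d --name svc-old docker.io/debian:bullseye /opt/svc/run
  podman run -d --name svc-new docker.io/debian:bookworm /opt/svc/run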

m132 10 hours ago

I see, you're providing a complementary perspective. I appreciate that, and indeed, Docker isn't always evil. My intention was to draw attention to its abuse and to compare it to the virtualization of unikernels, which appears to me to be on a similar trajectory.

As for the linker analogy: I compared docker-compose (not Docker proper) to a dynamic linker because it's often used to bring up larger multi-container applications, much as ld.so assembles a large monolithic application out of its many shared-library dependencies. Such multi-container applications can be similarly brittle if they're developed under the assumption that merely wrapping the pieces in containers assures portability; that defeats most of Docker's advantages and reduces it to a pile of excess layers of indirection. It's similar to the false belief that running kernel-mode code under a hypervisor is by itself more secure than running it as a process on top of a bare-metal kernel.
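
To make the analogy concrete, a minimal sketch (the api image and service layout are invented):

  # docker-compose.yml -- each service plays the role of a shared
  # object; service names on the compose network are the symbol table.
  services:
    api:
      image: example/api:1.4      # hypothetical application image
      environment:
        DB_HOST: db               # resolved via compose DNS, like a symbol
      depends_on: [db, cache]
    db:
      image: postgres:16
    cache:
      image: redis:7

Run docker compose up -d and the graph comes up together; swap one service for an incompatible version and the whole assembly breaks at startup, just like a monolith fed the wrong .so.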

nine_k 4 hours ago

Indeed, the problem of the distributed monolith does exist. If it arises, a reasonable engineering leader would just migrate to a proper monolith: https://www.twilio.com/en-us/blog/developers/best-practices/...