nine_k 12 hours ago

I see a number of assumptions in your post that don't match my view of the picture.

Containers arose as a way to solve the dependency problems created by traditional Unix. They grew out of tools like chroot, BSD jails, and Solaris Zones. Containers let you deploy dependencies that cannot be simultaneously installed on a traditional Unix host system. It's not a limitation of the UNIX architecture but rather a result of POSIX + tradition; e.g. Nix also solves this, but differently.
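
For example (image names and versions made up), each service can ship the userland it was built against, so conflicting dependency trees never have to meet on the host:

    docker run -d --name legacy-api  example/legacy-api:1.4    # carries its own debian:10-era libraries
    docker run -d --name billing-api example/billing-api:3.2   # carries a debian:12-era userland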

Containers (like chroot and jail before them) also help ensure that a running service does not depend on the parts of the filesystem it wasn't given access to. Additionally, containers can limit network access and process-tree access.

These limitations are not a proper security boundary, but they are definitely a dependency boundary, helping avoid spaghetti-style dependencies and surprises like "we never realized that our ${X} depends on ${Y}".
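
A sketch of such a boundary with plain docker run flags (image name and paths made up):

    docker run -d --name report-builder \
        --read-only --network none --cap-drop ALL \
        -v /srv/reports/in:/data:ro \
        example/report-builder:2.0
    # --read-only and the :ro mount constrain filesystem access, --network none
    # removes network access, and the container's own PID namespace hides the
    # host's process tree.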

Then, there's the Fundamental Theorem of Software Engineering [1], which states: "We can solve any problem by introducing an extra level of indirection." So yes, expect the number of levels of indirection to grow everywhere in the stack. A wise engineer can expect to merge or remove some levels here and there, when the need for them is gone, but would never expect new levels of indirection to stop emerging.

[1]: https://en.wikipedia.org/wiki/Fundamental_theorem_of_softwar...

m132 12 hours ago

To be honest, I've read your response 3 times and I still don't see where we disagree, assuming that we do.

I've mostly focused on the worst Docker horrors I've seen in production, extrapolating them to the future of containers, since pulling in new "containerized" dependencies will inevitably become just as effortless as pulling in regular dependencies currently is in the newer high-level programming languages. You've primarily described a relatively fresh or well-managed Docker deployment, while admitting that spaghetti-style dependencies have become the norm and new layers will keep piling up (and, by extension, make things harder to manage).

I think our points of view don't actually collide.

nine_k 10 hours ago

We do not disagree about the essence, but rather about emphasis. Some might say that sloppy engineers were happy to pack their Rube-Goldbergesque deployments into containers. I say that even the most excellent and diligent engineers sometimes faced situations where two pieces of software required incompatible versions of a shared library, which in turn depended on a tree of other libraries with incompatible versions, and so on; there is a practical limit to what you can and should do with bash scripts and LD_PRELOAD abuse.
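
The kind of workaround that situation forces without containers looks roughly like this (paths and library names invented):

    LD_LIBRARY_PATH=/opt/vendor-a/lib ./service-a &   # needs libfoo.so.1
    LD_LIBRARY_PATH=/opt/vendor-b/lib LD_PRELOAD=/opt/vendor-b/lib/libfoo.so.2 ./service-b &   # needs libfoo.so.2
    # Every transitive dependency of both trees now has to be kept consistent by
    # hand, which is where the practical limit shows up.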

Many of the "new" languages, like Go (16 years old), Rust (13), or Zig (9), can simply build static binaries that don't even depend on libc. This has both upsides and downsides, especially around security fixes: rebuilding a container to pick up an updated .so dependency is often easier and faster than rebuilding a Rust project.
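
For reference, the kind of builds I mean (a sketch; it assumes the musl targets are already installed, e.g. via rustup target add):

    CGO_ENABLED=0 go build -o server .                          # Go: static binary, no libc
    cargo build --release --target x86_64-unknown-linux-musl    # Rust: links statically against musl
    zig build-exe main.zig -target x86_64-linux-musl            # Zig: likewise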

Docker (or preferably Podman) is not a replacement for linkers. It's an augmentation to the package system, and a replacement for the common file system layout, which is inadequate for modern multi-purpose use of a Unix (well, Linux) box.

m132 10 hours ago

I see; you're providing a complementary perspective. I appreciate that, and indeed, Docker isn't always evil. My intention was to bring attention to the abuse of it and to compare it to the virtualization of unikernels, which to me appears to be on a similar trajectory.

As for the linker analogy, I compared docker-compose (not Docker proper) to a dynamic linker because it's often used to bring up larger multi-container applications, much like large monolithic applications with plenty of shared-library dependencies are put together by ld.so. Those multi-container applications can be just as brittle if they're developed under the assumption that merely wrapping them in containers will ensure portability, which defeats most of Docker's advantages and reduces it to a pile of excess layers of indirection. It's similar to the false belief that running kernel-mode code under a hypervisor is by itself more secure than running it as a process on top of a bare-metal kernel.
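
A minimal sketch of the pattern I mean, with invented service names; docker-compose resolves and starts the whole graph much like ld.so resolving a tree of shared objects:

    # docker-compose.yml (illustrative)
    services:
      api:
        image: example/api:latest
        depends_on: [auth, billing]   # api silently assumes both are co-deployed, in lockstep versions
      auth:
        image: example/auth:latest
      billing:
        image: example/billing:latest
        depends_on: [auth]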

nine_k 4 hours ago

Indeed, the problem of the distributed monolith does exist. If it arises, a reasonable engineering leader would just migrate to a proper monolith: https://www.twilio.com/en-us/blog/developers/best-practices/...