| ▲ | talkvoix 7 hours ago |
| A full decade since we took the 'it works on my machine' excuse and turned it into the industry standard architecture ('then we'll just ship your machine to production'). |
|
| ▲ | avsm 6 hours ago | parent | next [-] |
| (coauthor of the article here) Well, before Docker I used to work on Xen, and that possible future of massive block devices assembled using Vagrant and Packer has thankfully been avoided... One thing that's hard to capture in the article -- but that permeated the early DockerCons -- is the (positive) disruption Docker had in how IT shops were run. Before that, going to production was a giant effort, and 'shipping your filesystem' quickly was such a change in how people approached their work. We had so many people come up to us grateful that they could suddenly build services more quickly and get them into the hands of users without having to seek permission slips signed in triplicate. We're seeing another seismic cultural shift now with coding agents, but I think Docker had a similar impact back then, and it was a really fun community spirit. Less so today with the giant hyperscalers all dominating, sadly, but I'll keep my fond memories :-) |
| |
| ▲ | throwawaypath 5 hours ago | parent | next [-] | | >massive block devices assembled using Vagrant and Packer has thankfully been avoided... Funny comment considering lightweight/micro-VMs built with tools like Packer are what some in the industry are moving towards. | | |
| ▲ | avsm 5 hours ago | parent [-] | | And those lightweight VM base images are possible because Docker applied a downward pressure on OS base image sizes! Alpine Linux doesn't get enough credit for this; in addition to being a great base image, it was also the first distro to prioritise fast and small image creation (Gentoo and Arch were small, but not fast to create). | | |
| ▲ | kgwgk 3 hours ago | parent [-] | | Maybe in that alternative future of massive block devices some downward pressure on image sizes would have been applied just the same. | | |
| ▲ | avsm 3 hours ago | parent [-] | | It's not as easy; a block device has to be bootable and so usually bundles a kernel (large). And because the filesystem inside is opaque, you can't do layering like Docker does easily via overlayfs and friends. libguestfs does a heroic job of making VM images easier to manipulate from code, but it's an uphill battle... |
|
|
| |
| ▲ | talkvoix 6 hours ago | parent | prev [-] | | Great point about coding agents! Back then, Docker gave us 'it works on my machine, let's ship the machine'. Now, AI agents are giving us 'I have no idea how this works, let's ship the prompt'. The early Docker community spirit really was legendary though—before every hyperscaler wrapped it in 7 layers of proprietary managed services. Thanks for the memories and the write-up! | | |
| ▲ | avsm 6 hours ago | parent [-] | | Thanks for the kind words! I've been prodding @justincormack to resurrect the single most fun OS unconference I've ever attended -- New Directions in Operating Systems (last held back in 2014). https://operatingsystems.io Some of those talks strangely make more sense today (e.g. Rump Kernels or unikernels + coding agents seems like a really good combination, as the agent could search all the way through the kernel layers as well). |
|
|
|
| ▲ | syncsynchalt 4 hours ago | parent | prev | next [-] |
| I see this take a lot but I'd argue what Docker did was to entice everyone to capture their build into a repeatable process (via a Dockerfile). "Ship your machine to production" isn't so bad when you have a ten-line script to recreate the machine at the push of a button. |
| |
| ▲ | lioeters 4 hours ago | parent [-] | | Exactly my feeling. Docker is "works on this machine" with an executable recipe to build the machine and the application. Newer better solutions like OCI-compliant tools will gradually replace Docker, but the paradigm shift has provided a lot of lasting value. | | |
| ▲ | Gigachad 2 hours ago | parent [-] | | Yeah, Docker codifies the process of converting a base Linux distro into a working platform for the app. Every company I've worked at that didn't use Docker just has tribal knowledge or an outdated wiki page on the steps you need to take to get something working, vs. a Dockerfile that documents the process exactly. |
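To make the point concrete, here is a hypothetical Dockerfile of the ten-line sort described above (base image real, app layout invented for illustration) -- the whole "turn a base distro into a working platform" recipe captured as an executable document:

```dockerfile
# Illustrative only: a complete, repeatable build of the machine,
# replacing the tribal-knowledge wiki page with an executable recipe.
FROM alpine:3.19
RUN apk add --no-cache python3
WORKDIR /app
COPY . .
EXPOSE 8000
CMD ["python3", "-m", "http.server", "8000"]
```

Anyone on the team can run `docker build .` and get the same machine, which is the "push of a button" property the parent comments are pointing at.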
|
|
|
| ▲ | chuckadams 6 hours ago | parent | prev | next [-] |
| It's the ultimate in static linking. Perhaps a question that should be asked is why that approach is so compelling? |
| |
| ▲ | blackcatsec 5 hours ago | parent [-] | | I question that as well, it's also why Go is extremely popular. Could it just be a pendulum swing back towards static linking? Wonder when some enterprising OSS dev will rebrand dynamic linking in the future... | | |
| ▲ | jfjasdfuw 4 hours ago | parent [-] | | CGO_ENABLED=0 is sigma tier. I don't care about glibc or compatibility with /etc/nsswitch.conf. Look at the hack Rust does because it uses libc: > pub unsafe fn set_var<K: AsRef<OsStr>, V: AsRef<OsStr>>(key: K, value: V) | | |
| ▲ | jcgl 2 hours ago | parent [-] | | > I don't care about glibc or compatibility with /etc/nsswitch.conf. So what do you do when you need to resolve system users? I sure hope you don't parse /etc/passwd, since plenty of users (me included) use other user databases (e.g. sssd or systemd-userdbd). | | |
| ▲ | cyberax an hour ago | parent [-] | | Most software doesn't need to resolve users. You also can always shell out to `id` if you need an occasional bit of metadata. |
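cyberax's suggestion is easy to sketch in Go: with cgo disabled, rather than linking glibc's NSS machinery for the rare user lookup, shell out to the POSIX `id` utility, which itself goes through NSS and so still respects sssd/systemd-userdbd. A minimal sketch (assumes `id` is on PATH; the function name is made up):

```go
package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// uidOf shells out to the POSIX `id` utility instead of calling
// getpwnam via cgo, so it works from a CGO_ENABLED=0 static binary
// while still honouring whatever user database the host configures.
func uidOf(username string) (string, error) {
	out, err := exec.Command("id", "-u", username).Output()
	if err != nil {
		return "", fmt.Errorf("id -u %s: %w", username, err)
	}
	return strings.TrimSpace(string(out)), nil
}

func main() {
	uid, err := uidOf("root")
	if err != nil {
		fmt.Println("lookup failed:", err)
		return
	}
	fmt.Printf("uid of root: %s\n", uid)
}
```

The trade-off: a process spawn per lookup, which is fine for "an occasional bit of metadata" but not for anything on a hot path.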
|
|
|
|
|
| ▲ | redhanuman 6 hours ago | parent | prev | next [-] |
| The real trick was making "ship your machine" sound like best practice, and ten years later we're doing the same thing with AI: "it works in my notebook" just became "containerize the notebook and call it a pipeline". The abstraction always wins because fixing the actual problem is just too hard. |
| |
| ▲ | zbentley 6 hours ago | parent | next [-] | | > fixing the actual problem is just too hard. I think it’s laziness, not difficulty. That’s not meant to be snide or glib: I think gaining expertise in how to package and deploy non-containerized applications isn’t difficult or unattainable for most engineers; rather, it’s tedious and specialized work to gain that expertise, and Docker allowed much of the field to skip doing it. That’s not good or bad per se, but I do think it’s different from “pre-container deployment was hard”. Pre-container deployment was neglected and not widely recognized as a specialty that needed to be cultivated, so most shops sucked at it. That’s not the same as “hard”. | | |
| ▲ | skydhash 5 hours ago | parent [-] | | It's not even laziness or expertise. A lot of people are against learning conventions. They want their way, meaning whatever works on their computer. That's why they like the current scope of package managers, Docker, Flatpak,... They can do what they want in the sandbox provided, however nonsensical, and then ship the whole thing. And it will break if you look at it the wrong way. |
| |
| ▲ | Bratmon 4 hours ago | parent | prev | next [-] | | I mean, walking through a door is easier than tearing down a wall, walking through it, and rebuilding the wall. That doesn't mean the latter is a good idea. | |
| ▲ | goodpoint 6 hours ago | parent | prev [-] | | ...while completely forgetting about security |
|
|
| ▲ | hwhshs 2 hours ago | parent | prev | next [-] |
| In 2002 I used to think: why can't they package a website? These .doc installation instructions are insane! What a waste of someone's time. I sort of had the problem in mind; Docker is the answer. Not clever enough to have invented it. If I had, I would probably have invented Octopus Deploy, as I was a Microsoft/.NET guy. |
|
| ▲ | curt15 5 hours ago | parent | prev | next [-] |
| >'then we'll just ship your machine to production' Minus the kernel, of course. What is one to do for workloads requiring special kernel features or modules? |
| |
|
| ▲ | Skywalker13 4 hours ago | parent | prev | next [-] |
| Oh, thank you... I'm not alone... I'm so tired of seeing crappy containers with pseudo service management handled by Dockerfiles, used instead of proper and serious packaging like that of many venerable Linux distributions. |
|
| ▲ | forrestthewoods 6 hours ago | parent | prev [-] |
| Linux user space is an abject disaster of a design. So so so bad. Docker should not need to exist. Running computer programs need not be so difficult. |
| |
| ▲ | esafak 6 hours ago | parent [-] | | Who does it right? | | |
| ▲ | jjmarr 6 hours ago | parent | next [-] | | Nix and Guix. Good luck convincing people to switch! | | |
| ▲ | abacate 6 hours ago | parent | next [-] | | Trying to convince people usually makes any resistance worse. Using it, solving problems with it, and building a real community around it tend to make a much greater impact in the long run. | | |
| ▲ | NortySpock 6 hours ago | parent [-] | | Yeah, but if the problem you are solving is rare for most practitioners, effectively theoretical until it actually happens, then people won't switch until they get bit by that particular problem. |
| |
| ▲ | zbentley 6 hours ago | parent | prev [-] | | But they’re roughly the same paradigm as docker, right? My understanding of the Nix approach is that it’s still reproducing most of a user land/filesystem in a captive/separate/sandbox environment. Like, docker is using namespaces for more stuff, Nix has a heavier emphasis on reproducibility/determinism, but … they’re both still throwing in the towel on deploying directly on the underlying OS’s userland (unless you go all the way to nixOS) and shipping what amounts to a filesystem in a box, no? | | |
| ▲ | jjmarr 6 hours ago | parent [-] | | I daily drive NixOS. I don't have a global "userland". Packages are shipped from upstream and pull in the dependencies they need to function. That means unlike Gentoo, I've never dealt with a "slot conflict" where two packages want conflicting dependencies. And unlike Ubuntu, I have new versions of everything. Pick 2: share dependencies, be on the bleeding edge, or waste your time resolving conflicts. |
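For anyone curious what "no global userland" looks like in practice, a minimal hypothetical shell.nix (package names are real nixpkgs attributes; the selection is illustrative): each package resolves to an immutable /nix/store path that bundles its own dependency closure, which is why two environments can pin conflicting versions without a slot conflict.

```nix
# shell.nix -- illustrative sketch. Nothing here installs into, or
# depends on, a system-wide userland; each package carries its own
# dependency closure under /nix/store.
{ pkgs ? import <nixpkgs> {} }:
pkgs.mkShell {
  packages = [
    pkgs.go       # current Go from nixpkgs, with its own deps
    pkgs.sqlite   # its libsqlite never collides with another env's
  ];
}
```

Entering the shell with `nix-shell` composes exactly these store paths onto PATH, which is the "share dependencies + bleeding edge" combination the comment above describes.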
|
| |
| ▲ | jfjasdfuw 4 hours ago | parent | prev | next [-] | | Plan9 or Inferno. | |
| ▲ | forrestthewoods 6 hours ago | parent | prev | next [-] | | Windows is an order of magnitude better in this regard. | | |
| ▲ | vanviegen 6 hours ago | parent | next [-] | | It used to be, but only in cases where your distro doesn't just package whatever software you require. Nowadays I prefer Flatpak or AppImage over crappy custom Windows installers for those cases. They allow for sandboxing and reliable updating/uninstallation. | |
| ▲ | skydhash 5 hours ago | parent [-] | | These days, I treat anything that ships via docker/flatpak first as built by someone who only cares about their own computer, especially if the project is open source. As soon as a library or a tool updates, they usually rush to add a hard dependency on it for no reason other than to be on the "bleeding edge". |
| |
| ▲ | robmusial 6 hours ago | parent | prev [-] | | And yet I'm constantly getting asked when we'll support Windows containers at my office. | | |
| ▲ | avsm 6 hours ago | parent | next [-] | | We've given up on native Windows containers in OCaml after trying to use them for our CI builds for many years. See https://www.tunbury.org/2026/02/19/obuilder-hcs/ for our recent switch to HCS instead. Compared to Linux containers, they're very much a second-class citizen in the Microsoft worldview of Docker. | |
| ▲ | forrestthewoods 5 hours ago | parent | prev [-] | | This is because your team doesn’t know how to ship software without using containers. If you have adopted a bad tool then people are likely to want the bad tool in more places. This is the opposite of a virtuous cycle and is a horrible form of tech debt. |
|
| |
| ▲ | whateverboat 6 hours ago | parent | prev [-] | | Windows. | | |
|
|