vbezhenar 3 days ago

Docker is a genius idea that looks obvious in retrospect, but someone needed to invent it.

Docker is more than just chroot. You also need an overlay file system, an OCI registry, and a community behind it to create thousands of useful images. And, of course, the whole idea of building images layer by layer and using immutable images to spawn mutable containers.
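Roughly, that layering idea is a thin recipe on top of plain chroot. A minimal sketch, assuming the layer directories already exist (all paths here are made up):

    # read-only image layers + one writable layer = a "container" rootfs
    mkdir -p /merged /upper /work
    mount -t overlay overlay \
        -o lowerdir=/layers/app:/layers/base,upperdir=/upper,workdir=/work /merged
    chroot /merged /bin/sh    # mutable container on top of immutable layers

The lowerdir layers stay untouched; every write lands in upperdir, which is exactly how one immutable image can spawn many independent containers.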

I don't actually think you need network or process isolation. In terms of isolation, chroot is enough for most practical needs. Network and process isolation are nice to have, but they are not essential.

harrall 2 days ago | parent | next [-]

I was a very early adopter of Docker and what sold me was Dockerfiles.

A SINGLE regular text file that took regular shell commands and could build the same deployment from scratch every time and then be cleaned up in one command.
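For example, something like this was the whole recipe (the app itself is made up, but the shape is typical of the era):

    # Dockerfile: one plain text file of plain shell commands
    FROM ubuntu:14.04
    RUN apt-get update && apt-get install -y python
    COPY app.py /app/app.py
    WORKDIR /app
    CMD ["python", "app.py"]

Then "docker build -t myapp ." builds it from scratch every time, and "docker rmi myapp" cleans it all up.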

This was UNHEARD of. Every other solution required learning new languages, defining “modules,” creating sets of scripts, or doing a lot of extra things. None of that was steezy.

I was so sold on Dockerfiles that I figured even if the Docker project died, my Dockerfiles would live on, because other people would copy the idea. Now it's been 10 years, and Docker and containerization have changed a lot, but what hasn't? Dockerfiles. My 10-year-old Dockerfiles are still valid. That's how good they were.

akdev1l 3 days ago | parent | prev | next [-]

Network isolation is very important too: that's what lets people run 4 containers all listening on port 80.

Process isolation is less prominent.

vbezhenar 3 days ago | parent | next [-]

You can bind your application to 127.0.0.2 for one container and to 127.0.0.3 for another. Both can listen on port 80, and both can communicate with each other. And you can run yet another container bound to 1.2.3.4:80 as a reverse proxy. You can use iptables/nftables to prevent undesired connections, and a manually (or script-) crafted /etc/hosts to point names at those loopback addresses. Or just a DNS server. It's all doable.
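A sketch of what that looks like, with stand-in services and arbitrary addresses (and assuming an nftables inet filter table already exists):

    # two "containers", both on port 80, no network namespaces needed
    python3 -m http.server 80 --bind 127.0.0.2 &
    python3 -m http.server 80 --bind 127.0.0.3 &

    # name them so they can find each other
    echo "127.0.0.2 app-a" >> /etc/hosts
    echo "127.0.0.3 app-b" >> /etc/hosts

    # only app-a may talk to app-b on port 80
    nft add rule inet filter input ip daddr 127.0.0.3 tcp dport 80 ip saddr != 127.0.0.2 drop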

The only thing you need is the ability to configure which address the application binds to. But any sane application has that configuration knob.

Of course, things are much easier with network namespaces, but you can go pretty far with the host network (and I'd say it might be easier to understand and manage).

cbluth 2 days ago | parent [-]

You can see why people like the Docker experience: you get all of that in a single interface, instead of one-off scripts touching a ton of little things.

mikepurvis 3 days ago | parent | prev | next [-]

Process isolation is more about load management/balancing, which is more of a production concern than a development one.

tguvot 2 days ago | parent | prev | next [-]

I tried to build something like Docker at work around 2003-2004, trying to solve the problem of distributing/updating/rolling back software on the network appliances we made. Overlay filesystems back then were immature/buggy, so it went nowhere. A loopback-mounted filesystem was not sufficient (I don't remember why).

lyu07282 3 days ago | parent | prev [-]

What I always wondered is why qcow2 + QEMU never gave rise to a similar system. They support snapshots and backing files, so it should be possible to implement a system similar to Docker. Instead, what we got is just this terrible libvirt.
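Backing files really do behave like copy-on-write layers. A rough sketch of a Docker-ish chain (file names are made up):

    # immutable base "layer"
    qemu-img create -f qcow2 base.qcow2 10G
    # app "layer" on top: copy-on-write, stores only the diff
    qemu-img create -f qcow2 -b base.qcow2 -F qcow2 app.qcow2
    # a mutable "container" spawned from the immutable chain
    qemu-img create -f qcow2 -b app.qcow2 -F qcow2 instance.qcow2
    qemu-system-x86_64 -m 1G -drive file=instance.qcow2,format=qcow2

What's missing is everything around it: a registry, a build file, and tooling to treat those chains as named, shareable images.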

dboreham 2 days ago | parent | next [-]

We called it "VMware".

everfrustrated 2 days ago | parent | prev | next [-]

The short answer is that Docker concentrated on files, whereas other VM-oriented tech concentrated on block devices.

Docker's approach is conceptually simpler for devs and for the layering use case, but it has huge performance issues, which is why it never went anywhere for classic, non-Docker IT use cases.

westurner 2 days ago | parent | prev [-]

containerd/nerdctl supports a number of snapshotter plugins: Nydus, eStargz, SOCI (Seekable OCI), fuse-overlayfs:

containerd/stargz-snapshotter: https://github.com/containerd/stargz-snapshotter

containerd/nerdctl//docs/nydus.md: https://github.com/containerd/nerdctl/blob/main/docs/nydus.m... :

nydusify and Check Nydus image: https://github.com/dragonflyoss/nydus/blob/master/docs/nydus... :

> Nydusify provides a checker to validate Nydus image, the checklist includes image manifest, Nydus bootstrap, file metadata, and data consistency in rootfs with the original OCI image. Meanwhile, the checker dumps OCI & Nydus image information to output (default) directory.

nydus: https://github.com/dragonflyoss/nydus

awslabs/soci-snapshotter: https://github.com/awslabs/soci-snapshotter ; lazily starts standard OCI images

/? lxc copy on write: https://www.google.com/search?q=lxc+copy+on+write : lxc-copy supports btrfs, zfs, lvm, overlayfs

lxc/incus: "Add OCI image support" https://github.com/lxc/incus/issues/908

opencontainers/image-spec; OCI Image spec: https://github.com/opencontainers/image-spec

opencontainers/distribution-spec; OCI Image distribution spec: https://github.com/opencontainers/distribution-spec

But then in the OCI runtime spec, opencontainers/runtime-spec//config.md, there is an example of a bundle config.json: https://github.com/opencontainers/runtime-spec/blob/main/con...

The LXC approach is to run systemd in the container.

The quadlet approach is to not run systemd as /sbin/init in the container; instead you create .container files in /etc/containers/systemd/ (rootful) or ~/.config/containers/systemd/*.container (rootless), so that the host systemd manages and logs the container processes.
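A minimal sketch of such a quadlet file (the unit name and image are just illustrative):

    # ~/.config/containers/systemd/web.container (rootless)
    [Unit]
    Description=Example web container

    [Container]
    Image=docker.io/library/nginx:latest
    PublishPort=8080:80

    [Install]
    WantedBy=default.target

After a "systemctl --user daemon-reload", quadlet generates web.service, which you start and inspect like any other unit.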

Then I realized you said QEMU, not LXC.

LXD: https://canonical.com/lxd :

> LXD provides both [QEMU,] KVM-based VMs and system containers based on LXC – that can run a full Linux OS – in a single open source virtualisation platform. LXD has numerous built-in management features, including live migration, snapshots, resource restrictions, projects and profiles, and governs the interaction with various storage and networking options.

From https://documentation.ubuntu.com/lxd/latest/reference/storag... :

> LXD supports the following storage drivers for storing images, instances and custom volumes:

> Btrfs, CephFS, Ceph Object, Ceph RBD, Dell PowerFlex, Pure Storage, HPE Alletra, Directory, LVM, ZFS

You can run Podman or Docker within an LXD host, with or without a backing storage pool. FWIU, it's possible for containers in an LXD VM to use the BTRFS or ZFS storage drivers (creating e.g. BTRFS subvolumes) instead of running overlayfs within the VM, by editing storage.conf.
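For reference, that driver switch is a small edit to containers/storage's storage.conf; a minimal sketch (the paths shown are the usual defaults):

    # /etc/containers/storage.conf (rootful) or ~/.config/containers/storage.conf
    [storage]
    driver = "btrfs"
    graphroot = "/var/lib/containers/storage"
    runroot = "/run/containers/storage"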