| ▲ | FireBeyond 6 hours ago |
| I mean you can say that, but on the topic of rootless, regardless of "interest" at Docker, they did nothing about it. I was at Red Hat at the time, a PM in the BU that created podman, and Docker's intransigence on rootless was probably the core issue that led to podman's creation. |
|
| ▲ | cpuguy83 5 hours ago | parent | next [-] |
| That's true, we didn't do much around it. Small startup with monetization problems and all. |
| |
| ▲ | jeremyjh 5 hours ago | parent [-] | | So at least some of that is absolutely true. I’d be surprised if the systemd thing was not also true. I think it’s quite likely Docker did not have a good handle on the “needs” of the enterprise space. That is Red Hat’s bread and butter; are you saying they developed all of that for no reason? |
|
|
| ▲ | mikepurvis 6 hours ago | parent | prev [-] |
| I've really appreciated RH's work both on podman/buildah and on the supporting infrastructure, like the kernel work that enables nesting, e.g. using buildah to build an image inside a containerized CI runner. That said, I've been really surprised not to see more first-class CI support for a repo supplying its own Dockerfile and being like "stage one is to rebuild the container", "stage two is a bunch of parallel tests running in instances of the container". In modern Dockerfiles it's pretty easy to avoid manual cache-busting by keying everything to a package manager lockfile, so it's annoying that the default CI paradigm is still "separate job somewhere that rebuilds a static base container on a timer". |
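A hand-rolled sketch of that two-stage pattern, assuming GitLab CI and a runner configured to allow nested buildah builds; the registry paths, job names, and run-tests.sh script are placeholders, not anything from the thread:

```yaml
# Hypothetical .gitlab-ci.yml: stage one rebuilds the repo's own Dockerfile,
# stage two fans tests out in parallel inside the freshly built image.
stages:
  - image
  - test

build-image:
  stage: image
  image: quay.io/buildah/stable   # daemonless build inside the containerized runner
  script:
    - buildah bud --layers -t "$CI_REGISTRY_IMAGE/ci:$CI_COMMIT_SHA" .
    - buildah push --creds "$CI_REGISTRY_USER:$CI_REGISTRY_PASSWORD" "$CI_REGISTRY_IMAGE/ci:$CI_COMMIT_SHA"

run-tests:
  stage: test
  image: "$CI_REGISTRY_IMAGE/ci:$CI_COMMIT_SHA"   # run inside the image built above
  parallel: 4                                     # GitLab fans this out into 4 shard jobs
  script:
    - ./run-tests.sh --shard "$CI_NODE_INDEX" --of "$CI_NODE_TOTAL"
```

On pipelines where the lockfile hasn't changed, the dependency layers are cache hits (given --layers and a persistent or registry-backed cache), so most of the pipeline cost is just the test fan-out.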
| |
| ▲ | FireBeyond 5 hours ago | parent [-] | | Yeah, I've moved on from there, but I agree. There wasn't a lot of focus on the CI side of things beyond the stuff that ArgoCD was doing, and Shipwright (which isn't really CI/CD focused, but did some work around the actual build process; it really suffered a failure to launch). | | |
| ▲ | mikepurvis 4 hours ago | parent [-] | | My sense is that a lot of the container CI space just kind of assumes that every run starts from nothing or a generic upstream-supplied "stack:version" container and installs everything every time. And that's fine if your app is relatively small and the dependency footprint is, say, <1GB. But if that's not the case (robotics, ML, gamedev, etc.), or especially if you're dealing with a slow, non-parallel package manager like apt, that upfront dependency install starts to take non-trivial time, which is particularly galling for a step that container tools are so well equipped to cache away. I know depot helps a bunch with this by at least optimizing caching during the build and ensuring the registry has high locality to the runner that will consume the image. |
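For a do-it-yourself version of that caching, one option is to back the image build with a registry cache so the heavy dependency layers are pulled rather than re-executed on every run. A sketch assuming GitHub Actions and BuildKit; the ghcr.io refs are placeholders and registry login is omitted:

```yaml
# Hypothetical workflow: BuildKit pulls previously built layers (including the
# big apt/pip install) from a registry-backed cache instead of re-running them.
jobs:
  build-image:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - uses: docker/setup-buildx-action@v3
      - uses: docker/build-push-action@v6
        with:
          context: .
          push: true
          tags: ghcr.io/example/app-ci:${{ github.sha }}
          # mode=max also caches intermediate layers, not just the final stage
          cache-from: type=registry,ref=ghcr.io/example/app-ci:buildcache
          cache-to: type=registry,ref=ghcr.io/example/app-ci:buildcache,mode=max
```

That doesn't give you depot's runner-to-registry locality, but it does mean the dependency-install layer is only re-executed when its inputs (e.g. the lockfile) actually change.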
|
|