| ▲ | cpuguy83 6 hours ago |
Absolutely none of this is true.
Docker had support contracts (Docker EE... and trying to remember, docker-cs before that naming pivot?). Corporate customers do not care about any of the things you mentioned. I mean, maybe some, but in general no. That's not what corps think about. There was never "no interest" at Docker in cgv2 or rootless.
Never.
cgv2 early on was not usable. It lacked so much of the functionality that v1 had.
It also didn't buy much, particularly because most Docker users aren't manually managing cgroups themselves. Docker literally sold a private registry product. It was the first thing Docker built and sold (and no, it was not late, it was very early on).
| ▲ | djb_hackernews 5 hours ago | parent | next [-] |
For the record, cpuguy83 was in the trenches at Docker circa 2013; it was him and a handful of other people working on Docker when it went viral. He has an extreme insider's perspective, and I'd trust what he says.
| ▲ | FireBeyond 6 hours ago | parent | prev | next [-] |
I mean, you can say that, but on the topic of rootless, regardless of "interest" at Docker, they did nothing about it. I was at Red Hat at the time, a PM in the BU that created podman, and Docker's intransigence on rootless was probably the core issue that led to podman's creation.
| ▲ | cpuguy83 5 hours ago | parent | next [-] |
That's true, we didn't do much around it. Small startup with monetization problems and all.
| ▲ | jeremyjh 5 hours ago | parent [-] |
So absolutely at least some of that is true. I'd be surprised if the systemd thing was not also true. I think it's quite likely Docker did not have a good handle on the "needs" of the enterprise space. That is Red Hat's bread and butter; are you saying they developed all of that for no reason?
| ▲ | mikepurvis 6 hours ago | parent | prev [-] |
I've really appreciated RH's work both on podman/buildah and on the supporting infrastructure, like the kernel features that enable nesting, e.g. using buildah to build an image inside a containerized CI runner. That said, I've been really surprised not to see more first-class CI support for a repo supplying its own Dockerfile and saying "stage one is to rebuild the container; stage two is a bunch of parallel tests running in instances of the container". In modern Dockerfiles it's pretty easy to avoid manual cache-busting by keying everything to a package manager lockfile, so it's annoying that the default CI paradigm is still "a separate job somewhere that rebuilds a static base container on a timer".
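To make that concrete, here is a minimal Dockerfile sketch of the lockfile-keyed, build-then-test pattern the comment describes; the Node/npm stack, stage names, and test commands are illustrative assumptions, not anything from the thread:

    # syntax=docker/dockerfile:1
    FROM node:20 AS deps
    WORKDIR /app
    # Copy only the manifest and lockfile first: the install layer below
    # is rebuilt only when the lockfile changes, not on every source edit.
    COPY package.json package-lock.json ./
    RUN npm ci

    FROM deps AS app
    # Source changes invalidate only this layer, never the dependency install.
    COPY . .

    FROM app AS test-unit
    RUN npm test

    FROM app AS test-lint
    RUN npx eslint .

CI can then run "docker build --target test-unit ." and "docker build --target test-lint ." as parallel jobs (or build both targets at once with docker buildx bake); each one reuses the cached deps layer instead of reinstalling.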
| ▲ | FireBeyond 5 hours ago | parent [-] |
Yeah, I've moved on from there, but I agree. There wasn't a lot of focus on the CI side of things beyond what ArgoCD was doing, and Shipwright (which isn't really CI/CD-focused, but did some work around the actual build process; it really suffered a failure to launch).
| ▲ | mikepurvis 4 hours ago | parent [-] |
My sense is that a lot of the container CI space just kind of assumes that every run starts from nothing, or from a generic upstream-supplied "stack:version" container, and installs everything every time. That's fine if your app is relatively small and the dependency footprint is, say, <1GB. But if that's not the case (robotics, ML, gamedev, etc.), or especially if you're dealing with a slow, non-parallel package manager like apt, that upfront dependency install starts to take non-trivial time, which is particularly galling for a step that container tools are so well equipped to cache away. I know depot helps a bunch with this by optimizing caching during the build and ensuring the registry has high locality to the runner that will consume the image.
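Not something the comment itself proposes, but one common mitigation for the slow-apt case is BuildKit's cache mounts, which persist apt's download cache across rebuilds. A minimal sketch; the base image and package names are placeholders:

    # syntax=docker/dockerfile:1
    FROM ubuntu:24.04 AS base
    # The stock image deletes downloaded .debs after install; disable that
    # so the cache mounts below actually accumulate packages.
    RUN rm -f /etc/apt/apt.conf.d/docker-clean && \
        echo 'Binary::apt::APT::Keep-Downloaded-Packages "true";' \
            > /etc/apt/apt.conf.d/keep-cache
    # The mounts survive across builds, so even when this layer is
    # invalidated, packages come from the local cache rather than being
    # re-downloaded.
    RUN --mount=type=cache,target=/var/cache/apt,sharing=locked \
        --mount=type=cache,target=/var/lib/apt,sharing=locked \
        apt-get update && \
        apt-get install -y --no-install-recommends build-essential cmake

For ephemeral runners that can't keep local state, buildx can also export and import layer cache through a registry (--cache-to / --cache-from with type=registry), which is roughly the niche services like depot optimize.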
| ▲ | oblio 2 hours ago | parent | prev [-] |
I've worked in build/release engineering/devops for a long time. I would be utterly shocked if corporate customers didn't want corporate Docker proxies/caches/mirrors. Entire companies have been built on language-specific artifact repositories; generic ones, like Docker registries, are even more sought after.