| ▲ | franga2000 2 days ago |
| It breaks the isolation for that one container; the rest are just fine. That's clearly done in order to dynamically spin up CI/CD containers, which you obviously can't do with something like compose. I get why you wouldn't want to do that on a machine running other things, and I wouldn't either. But you're acting as if this is some strange, unnecessary and unexpected thing to require, when in reality basically everything does it this way, and there isn't a good alternative without a ton of additional complexity the vast majority of people won't need. |
|
| ▲ | hamdingers 2 days ago | parent | next [-] |
| > It breaks the isolation for that one container, the rest are just fine. Wrong. A container with access to the socket can compromise any other container, and start new containers with privileged access to the host system. It compromises everything. This is a risk worth flagging. |
| |
| ▲ | mbreese 2 days ago | parent | next [-] | | If you want to be able to spin up CI/CD containers, don’t you kinda already need to have docker socket access? In that case, you’ve already decided that this isn’t a threat vector you’re concerned about. Yes, this probably makes it easier, but the ability to start up new containers for CI/CD is what makes this threat possible. So, I’m not sure this is something I’d worry much about. Perhaps they should flag this in the documentation as something to be noted, but otherwise, I’m not sure how else you get this functionality. Is there another way? | | |
| ▲ | soraminazuki 2 days ago | parent | next [-] | | It's a multi-user Git / CI/CD / project management platform. If you introduce this in your organization, a single vulnerability can take down the entire system and any other application running on the same host. You can't just "decide that this isn’t a threat vector" without taking the use case into account. Or at least it should come with alarm bells warning users that it's unsafe. | | |
| ▲ | franga2000 2 days ago | parent [-] | | What is "entire system" here? I'd run something like that in a VM, so the "entire system" would be nothing but the app itself. If there is an RCE vuln in the app, your users are just as unsafe whether it's running as root on the host or as nobody in a container. The valuable data is all inside. |
| |
| ▲ | hebocon 2 days ago | parent | prev [-] | | Running a binary as a non-root user with scoped access to Docker commands seems more appropriate to me. | | |
| ▲ | franga2000 2 days ago | parent [-] | | What do you mean by scoped access? A bunch of regexes checking that the app doesn't add any dangerous flags to docker run? That sounds like a fun CTF challenge to me, which is not a good thing for a security feature... |
|
| |
| ▲ | franga2000 2 days ago | parent | prev | next [-] | | Yes, that's exactly what I said. The container with the socket is not isolated, but all the other containers are, including the CI containers, which is the whole point. | | |
| ▲ | cassianoleal 2 days ago | parent [-] | | No containers, existing or potential, are isolated from the one with socket access. | | |
| ▲ | franga2000 2 days ago | parent [-] | | The code inside those containers is isolated, which is the whole point. Only the app or runner container has socket access, which it uses to create new containers without socket access, and it runs user code in there. If you get RCE in the app/runner, you get RCE on the host, yes, no shit. But if you get RCE in any other container on the system, you're properly contained. | | |
| ▲ | hamdingers 2 days ago | parent [-] | | It appears you fundamentally don't understand what mounting the docker socket is doing. I'm sorry to give you homework but you need to go look it up to participate in this conversation. > The code inside those containers is isolated, which is the whole point. A container with socket access can replace code or binaries in any other container, read any container's volumes and environment variables, replace whole containers, etc. That does not meet any definition of "isolated" | | |
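To make the risk concrete: the socket speaks the Docker Engine API, so any process that can reach it only needs to POST JSON. A hedged Python sketch of the kind of `/containers/create` payload an attacker with socket access could send (the field names follow the Engine API; the escape path itself is illustrative):

```python
import json

# Sketch: the JSON body a process with socket access could POST to
# /containers/create over /var/run/docker.sock. Mounting the host root
# and requesting privileged mode turns "container access" into
# "root on the host".
def escape_payload(image="alpine"):
    return {
        "Image": image,
        "Cmd": ["chroot", "/host", "sh"],   # drop into the host filesystem
        "HostConfig": {
            "Privileged": True,             # disable most isolation
            "Binds": ["/:/host"],           # mount the host root read-write
            "PidMode": "host",              # share the host PID namespace
        },
    }

print(json.dumps(escape_payload(), indent=2))
```

Nothing here requires a vulnerability in Docker itself; this is the documented API doing exactly what it is for, which is why socket access is equivalent to root on the host.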
| ▲ | franga2000 2 days ago | parent [-] | | But those containers DON'T have socket access. ONE container has socket access, then it creates other containers WITHOUT socket access. Those containers ARE isolated. Since the untrusted (user provided) code runs in those, the setup is reasonably secure. An RCE in OneDev is an RCE on the host, but that's a completely different threat model. The important part is that user code is isolated, which it is. | | |
| ▲ | hamdingers 2 days ago | parent [-] | | > The important part is that user code is isolated, which it is. It isn't for the reasons I stated in previous comments, which you are unable to refute. Your dogged insistence to the contrary is bizarre. I hope you do not work in this area. | | |
| ▲ | franga2000 2 days ago | parent [-] | | I actually don't know who is misunderstanding who here. I work with containers daily and this is how I understand this situation: The runner (trusted code) is tasked with taking job specifications from the user (untrusted code) and running them in isolated environments. Correct? The runner is in a container with a mounted docker socket. It sends a /containers/create request to the socket. It passes a base image, some resource limits and maybe a directory mount for the checked out repository (untrusted code). The code could alternatively be copied instead of mounted. Correct? The new container is created by dockerd without the socket mounted, because that wasn't specified by the runner ("Volumes": [] or maybe ["/whatever/user/repo/:/repo/"]). Correct? The untrusted code is now executed inside that container. Because the container was created with no special mounts or privileges, it is as isolated as if it was created manually with docker run. Correct? The job finishes executing, the runner uses the socket to collect the logs and artifacts, then it destroys the container. Correct? So please tell me how you think untrusted code could get access to the socket here? | | |
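The flow described above comes down to what goes into that `/containers/create` request. A hedged Python sketch of the payload a runner might build (image names, limits, and the job script path are illustrative); the point being argued is that the job container's `HostConfig` binds only the checked-out repo, never the socket:

```python
import json

DOCKER_SOCKET = "/var/run/docker.sock"

# Sketch of a create request for a CI job container: a base image,
# resource limits, and only the repo directory mounted. The socket is
# not in the Binds list, so the job container cannot reach dockerd.
def job_container_payload(image, repo_dir):
    return {
        "Image": image,
        "Cmd": ["sh", "/repo/ci.sh"],          # untrusted job script (illustrative)
        "HostConfig": {
            "Memory": 2 * 1024**3,             # 2 GiB memory limit
            "NanoCpus": 2_000_000_000,         # 2 CPUs
            "Binds": [f"{repo_dir}:/repo:rw"], # the repo, and nothing else
        },
    }

payload = job_container_payload("alpine:3.20", "/whatever/user/repo")
# The job container only sees the socket if the runner explicitly binds it.
assert all(DOCKER_SOCKET not in b for b in payload["HostConfig"]["Binds"])
print(json.dumps(payload, indent=2))
```

Under this sketch, the disagreement in the thread is about which container holds the socket: the job containers created this way are isolated, while the runner that builds the payload remains root-equivalent on the host.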
|
|
|
|
|
| |
| ▲ | fluidcruft 2 days ago | parent | prev [-] | | Would rootless docker help? (Potentially even running that specific workflow with its own dedicated user) |
|
|
| ▲ | soraminazuki 2 days ago | parent | prev | next [-] |
| It's poor security practice that shouldn't be overlooked. Mounting the Docker socket effectively allows the entire application to run with root privileges on the host. Given that this seems to be a multi-tenant application, the implications are even more concerning. The component responsible for spinning up CI/CD containers shouldn't operate within the security boundary of the rest of the application. On a related note, I believe Docker's design contributes to this issue. Its inflexible sandboxing model encourages such risky practices. |
| |
| ▲ | soraminazuki 2 days ago | parent | next [-] | | Apparently multiple people were triggered by the idea that their organization's Git forge, CI/CD, and project management shouldn't be a single system running as root. I can't fathom why. |
| ▲ | franga2000 2 days ago | parent | prev [-] | | No shit, I don't know why everyone is trying to explain Docker basics to me. All I'm saying is that socket access is required to spin up containers and it's nothing out of the ordinary for this use case. Of course it's an issue if you're using Docker to isolate OneDev from the rest of the apps running on your systems. But that's not everyone's use case. Anything that intentionally spins up user-controlled containers should be isolated in a VM. That's how every sane person runs GitLab runners, for example. |
|
|
| ▲ | worldsayshi 2 days ago | parent | prev [-] |
| Isn't kaniko designed to solve this? |
| |
| ▲ | franga2000 2 days ago | parent [-] | | As far as I know, kaniko handles the "I'm a CI job inside a container and I want to build a container image" part. The reason CI/CD runners need socket access is to create those job containers in the first place. Using Podman to create job containers inside the app Docker container would be a solution, but Podman containers have many subtle incompatibilities with Docker and its ecosystem, so it makes sense they wouldn't want to use that, at least by default. |
|