▲ RadiozRadioz 9 hours ago:

If you're the one building the image, rebuild with newer versions of the constituent software and re-create the container. If you're pulling the image from a public repository (or using a dynamic tag), bump the version number you're pulling and re-create. Several automations exist for both, if you're into automatic updates. To me, that workflow is no more arduous than what one would do with apt/rpm: rebuild the package and install, or just install. How does one do it on Nix? Bump the version in a config and install? Seems similar.
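A minimal sketch of that rebuild-and-re-create workflow (assuming Docker, a local Dockerfile, and a hypothetical image/container name `myapp`; not runnable without a Docker daemon):

```shell
# Rebuild the image, forcing a fresh pull of the pinned base image
docker build --pull -t myapp:latest .

# Re-create the container from the freshly built image
docker stop myapp && docker rm myapp
docker run -d --name myapp myapp:latest
```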
▲ mixedCase 36 minutes ago:

Now do that for 30 services plus system config such as the firewall, routing (if you do that), DNS, and so on and so forth. Nix is a one-stop shop to have everything done right, declaratively, and with an easy lock file, unlike Docker. Doing all of that with containers is a spaghetti soup of custom scripts.
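For comparison, the Nix side of the same update (a sketch, assuming a flake-based NixOS configuration; the hostname `myhost` is a placeholder) bumps one lock file for every service and system setting at once:

```shell
# Update the lock file: all flake inputs (nixpkgs, modules, ...) at once
nix flake update

# Rebuild and activate the whole declared system:
# services, firewall, routing, DNS, and so on
sudo nixos-rebuild switch --flake .#myhost
```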
▲ teekert 7 hours ago:

Your understanding of containers is incorrect! Containers decouple programs from their state. The state/data live outside the container, so the container itself is disposable and can be discarded and rebuilt cheaply. Of course there need to be some provisions for when the state (i.e. the schema) needs to be updated by the containerized software, but that is the same as for non-containerized services. I'm a bit surprised this has to be explained in 2025; what field do you work in?
▲ johannes1234321 2 hours ago:

It's not that easy. First I need to monitor all the dependencies inside my containers, which in many cases amount to half a Linux distribution. Then I have to rebuild and deal with all the potential issues if the software doesn't build. Yes, on the happy path it is just a "docker build" which pulls updates from a distro repo and then builds only what is needed, but as soon as the happy path fails this can become really tedious really quickly, since everybody writes their Dockerfiles differently, handles build steps differently, uses different base distributions, and so on. I'm a bit surprised this has to be explained in 2025; what field do you work in?
▲ rkomorn 2 hours ago:

It does feel like one of the side effects of containers is that instead of having to worry about dependencies on one host, you now have to worry about dependencies for the host (because you can't just ignore security issues on the host) as well as in every container on said host. So you go from one host + N services to one host + up-to-N images + N services.
▲ zelphirkalt 2 hours ago:

I think you are not too wrong about this. State _can_ live outside the container, and in most cases it should, but it doesn't have to. A process running in a container can also write files inside the container, in a location not covered by any mount or volume. The downside (or upside) of this is that once you remove your container, that state is basically gone, which is why the state usually does live outside, as you say.
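The difference is easy to demonstrate (a sketch assuming Docker and the public `alpine` image; needs a Docker daemon to run):

```shell
# State written inside the container's own filesystem is
# discarded together with the container
docker run --name scratch alpine sh -c 'echo hello > /state.txt'
docker rm scratch

# State written to a named volume survives container removal
docker run --rm -v mydata:/data alpine sh -c 'echo hello > /data/state.txt'
docker run --rm -v mydata:/data alpine cat /data/state.txt   # prints "hello"
```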
▲ fijiaarone 3 hours ago:

Your understanding of not-containers is incorrect. In non-containerized applications, the data and state also live outside the application, stored in files, databases, caches, S3, etc. In fact, that is the only way containers can decouple programs from state: the application has to have done it already. But with containers you have the extra steps of setting up volumes, virtual networks, and port translation. But I'm not surprised this has to be explained to some people in 2025, considering you probably think that a CPU is something transmitted by a series of tubes from AWS to Vercel and made obsolete by NVidia NFTs.
▲ wwarek 10 hours ago:

> How do you update the software in the containers when new versions come out or vulnerabilities are actively being exploited?

You build a new image with updated/patched versions of the packages and then replace your vulnerable container with a new one, created from the new image.
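In the simplest case the patching lives in the Dockerfile itself, so a rebuild pulls current fixes (a sketch assuming a Debian-based base image; adjust for your distro):

```dockerfile
# Bumping this tag picks up a patched base image
FROM debian:bookworm-slim

# Re-running this layer (e.g. building with --no-cache) pulls
# current security updates from the distro repositories
RUN apt-get update && apt-get upgrade -y \
    && rm -rf /var/lib/apt/lists/*
```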
▲ teekert 7 hours ago:

Am I the only one surprised that this is a serious discussion in 2025?
▲ AdrianB1 6 hours ago:

Perhaps. There are many people, even in the IT industry, who don't deal with containers at all; think of Windows apps, games, embedded stuff, etc. Containers are a niche in the grand scheme of things, not the vast majority like some people assume.
▲ teekert 5 hours ago:

Really? I'm a biologist; I just do some self-hosting as a hobby and need a lot of FOSS software for work. I have experienced containers as nothing other than pervasive. I guess my surprise stems from the fact that even I, a non-CS person, know containers and see them as almost unavoidable. But what you say sounds logical.
▲ fwip 2 hours ago:

Self-hosting and bioinformatics are both great use cases for containers, because you want to "just let me run this software somebody else wrote", without caring what language it's in, looking for rpms, etc. If you're, e.g., a Java shop, your company already has a deployment strategy for everything you write, so there's not as much pressure to deploy arbitrary things into production.
▲ corn13read2 6 hours ago:

Pull the new image, stop the old container and start a new one. You can also make containers immutable.
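With Compose that whole loop is two commands (a sketch, assuming a docker-compose.yml that references registry images by tag; needs a Docker daemon):

```shell
# Fetch newer images for every service in the compose file
docker compose pull

# Re-create only the containers whose image actually changed
docker compose up -d
```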