| ▲ | Oxodao 6 hours ago |
| Docker already fills up my dev machines, yet they decided on this insane solution: > The containerd image store uses more disk space than the legacy storage drivers for the same images. This is because containerd stores images in both compressed and uncompressed formats, while the legacy drivers stored only the uncompressed layers. Why? |
|
| ▲ | black3r 20 minutes ago | parent | next [-] |
| Also this doesn't just mean more disk space usage, but also longer local build times... for the app I'm working on `exporting to image` takes 71.5 seconds with containerd, without containerd it's 4.3s (the rest of the build takes ~180 seconds). And that's just a 5.76GB image. |
|
| ▲ | giobox 4 hours ago | parent | prev | next [-] |
| > https://docs.docker.com/reference/cli/docker/system/prune/ Just in case: I'm always amazed how many Docker users don't know about the prune command for cleaning up the caches and deleting unused container images, and just slowly let their Docker image cache eat their disk. |
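
For anyone unfamiliar, a minimal cleanup pass looks something like this (requires a running Docker daemon; flags are from the docker CLI docs linked above):

```shell
# Remove stopped containers, dangling images, unused networks, and build cache.
docker system prune

# More aggressive: also remove all unused (not just dangling) images and volumes.
docker system prune --all --volumes
```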
| |
| ▲ | johannes1234321 3 hours ago | parent [-] | | Prune is nice, but if you have a bunch of containers which run a short time for a build step or similar, prune would collect those, too. A filter like "last used a few months ago" would be useful. | | |
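
Prune does support an age-based `until` filter, which gets close to this, with the caveat that it keys on creation time rather than last-use time:

```shell
# Remove stopped containers and unused images created more than ~30 days ago.
# Note: `until` filters on creation time, not "last used" time.
docker container prune --filter "until=720h"
docker image prune --all --filter "until=720h"
```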
|
|
| ▲ | ElevenLathe 6 hours ago | parent | prev | next [-] |
| Sounds like a straightforward time-space tradeoff: if you have the compressed layers sitting around when you need them, you can avoid the expense and time of compressing them. |
| |
| ▲ | Filligree 5 hours ago | parent | next [-] | | Why would I need the compressed layers? | | |
| ▲ | XYen0n an hour ago | parent | next [-] | | The OCI manifest references the hashes of these compressed layers, and re-compressing them does not guarantee obtaining the same hash | | |
| ▲ | flakes 18 minutes ago | parent [-] | | Recompressing should be guaranteed deterministic. It’s the packing/unpacking of tar archives to/from directories on disk that leads to the non-determinism (such as timestamps and ownership metadata). If the tar is left intact, both zstd and gzip should produce byte for byte identical outputs given the same compression parameters. |
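
A quick way to check the determinism claim with gzip (the `-n` flag zeroes the header timestamp so the output depends only on the input bytes and compression level; file names here are made up for illustration):

```shell
# Build a stand-in "layer" tarball and compress it twice with identical settings.
printf 'hello layer' > file.txt
tar -cf layer.tar file.txt

# -n omits the original name and timestamp from the gzip header.
h1=$(gzip -n -9 -c layer.tar | sha256sum | cut -d' ' -f1)
h2=$(gzip -n -9 -c layer.tar | sha256sum | cut -d' ' -f1)

# Same input, same parameters: byte-for-byte identical output.
[ "$h1" = "$h2" ] && echo "deterministic"
```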
| |
| ▲ | NewJazz 4 hours ago | parent | prev | next [-] | | Pushing | |
| ▲ | cryptonym 4 hours ago | parent | prev [-] | | To save disk space /s |
| |
| ▲ | 6 hours ago | parent | prev | next [-] | | [deleted] | |
| ▲ | colechristensen 5 hours ago | parent | prev | next [-] | | I'm not sure about the fastest macbook disk access, but even with NVMe storage I've found lz4 to be faster than the disk. That is (it's hard to state this exactly), compressed content gets read/written FASTER than uncompressed content, because fewer bytes need to transit the disk interface and the CPU can compress/decompress significantly faster than data moves through whatever disk bus you've got. | | |
| ▲ | fpoling 4 hours ago | parent [-] | | On my 2 years old ThinkPad laptop SSD is faster than lz4. On a fat EC2 server lz4 is faster. So one really has to test a particular config. | | |
| ▲ | colechristensen 3 hours ago | parent [-] | | Yeah, I'm not surprised the PCIe 5.0 transfer speeds matched with top tier SSD chips win that race. It still bothers me that the fastest most performant computer I have access to is almost always my laptop, and that by a considerable margin. Someone should do some lz4 vs. ssd benchmarks across hardware to make my argument more solid and the boundaries clear. |
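
A rough sketch of that benchmark in shell, using gzip as a stand-in where lz4 isn't installed. Note the zero-filled input is a best case for compression, so real layer data will land somewhere in between:

```shell
# 100 MB of highly compressible data as a stand-in for an image layer.
head -c 100000000 /dev/zero > raw.bin
gzip -1 -c raw.bin > raw.bin.gz

# Reading the compressed file moves far fewer bytes over the disk interface;
# if decompression outpaces the bus, the compressed path wins the race.
time cat raw.bin > /dev/null
time gzip -dc raw.bin.gz > /dev/null
```

Run each a few times to separate cold-cache from warm-cache numbers, and swap in `lz4 -d` where available.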
|
| |
| ▲ | freedomben 5 hours ago | parent | prev | next [-] | | did you mean the first "compressed" to be "uncompressed" ? | |
| ▲ | awestroke 3 hours ago | parent | prev [-] | | But if it stores the uncompressed layers, why store the compressed ones too? Why both at the same time? |
|
|
| ▲ | sschueller 5 hours ago | parent | prev [-] |
| [flagged] |
| |
| ▲ | stingraycharles 4 hours ago | parent | next [-] | | What does Apple have to do with any of this? | |
| ▲ | mschuster91 5 hours ago | parent | prev [-] | | > It is shameful for apple to hard solder their disks. There is no benefit to the user Actually, it is. The speed and latency difference does matter, that is how even an 8GB RAM MacBook feels snappier than many a 32GB Windows machine - it can use the disk as swap. | | |
| ▲ | giobox 3 hours ago | parent | next [-] | | This explanation for the soldered in SSD on some models has never fully made sense, because Apple make computers with removable fast SSDs right now: the M4 Mac Mini, and their range topping Mac Studios. I absolutely agree Apple typically ship a fast SSD in their computers. I am not convinced they had to solder them to achieve the performance. | |
| ▲ | newsoftheday 4 hours ago | parent | prev [-] | | I had to work on a Mac M3 for a year and it sucked: it did not feel snappier than any Windows or Linux machine (including this one) I've ever used, and that goes back to the 1980s. | | |
| ▲ | stingraycharles 4 hours ago | parent [-] | | I suggest you judge based on benchmarks rather than vibes. If you believe the latest M3 does not perform better than machines you’ve used in the 80s, I have no idea how to even start a reasonable discussion about this. | | |
| ▲ | newsoftheday 2 hours ago | parent [-] | | > If you believe the latest M3 does not perform better than machines you’ve used in the 80s That wasn't what I was trying to say, I apologize, I should have been clearer. What I intended to say was that I've used many different computers since the 1980s, so I have a wide and deep sampling of experiences with them, and to that end the M3 did NOT feel to me like it performed better. Regardless of the benchmarks, I know how a machine should feel, and the M3 did not feel any better than any other machine I've used (and that is a lot of laptops). |
|
|
|
|