mort96 3 hours ago

Interesting!

It doesn't seem to be only about overwriting; I can be in a directory without any .zst files, run the command to compress 55 files in parallel, and it's still 45M according to 'du -h'. But you're right, 'wc -c' shows 38809999 bytes regardless of whether 'du -h' shows 45M after a parallel compression or 38M after a sequential compression.

My mental model of 'du' was basically that it gives a size accurate to the nearest 4k block, which is usually accurate enough. Seems I have to reconsider. Too bad there's no standard alternative which has the interface of 'du' but with byte-accurate file sizes...
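
For what it's worth, stat(1) can show the apparent size and the allocated blocks side by side. A minimal sketch, assuming the BSD/macOS stat(1) format flags (GNU stat spells these differently, e.g. stat -c '%s %b'):

    # %z = apparent size in bytes, %b = 512-byte blocks actually allocated
    for f in *.zst; do
        stat -f '%N: %z bytes apparent, %b blocks on disk' "$f"
    done

Multiplying the block count by 512 should match the per-file figure that 'du' is summing.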

terrelln 2 hours ago | parent [-]

Yeah, it isn't quite that simple. E.g. 'du' reports 1.4MB for `/bin/ksh`, but the file is actually 2.4MB. Initially I thought it was because the file was sparse, but there are only 493KB of zeros, so something else is going on. Perhaps some filesystem-level blocks are deduped from other files? Or does APFS apply transparent compression? I'm not sure.

It does still seem odd that APFS reports a disk size significantly larger than the apparent size for these files. I'm not sure why that would ever be the case, unless there is something like deferred cleanup work.
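
If APFS transparent compression is in play, macOS should expose it as a file flag. A quick check, assuming the BSD ls(1) -O option and that decmpfs-compressed files are flagged as "compressed":

    # capital letter O: include BSD file flags in the long listing;
    # a transparently compressed file shows "compressed" among its flags
    ls -lO /bin/ksh

    # compare apparent bytes vs. 512-byte blocks actually allocated
    stat -f '%N: %z bytes apparent, %b blocks on disk' /bin/ksh

If the flag is set, the block count should come out smaller than the apparent size, which would at least explain the ksh numbers.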

mort96 43 minutes ago | parent [-]

Ross Burton on Mastodon suggests that it might be deduplication: when writing sequentially, later files can reuse blocks from earlier files, while that isn't the case as much when writing in parallel. That seems plausible enough to me.

mort96 3 minutes ago | parent [-]

I've concluded that this can't be the reason. It'd only result in a discrepancy where the size reported by 'du' is smaller than the apparent size (i.e. the number of bytes reported by 'wc -c') of the file. What we see here is that the size reported by 'du' is almost twice as large as the number of bytes. That can't be the result of deduplication.

I'll chalk it up to "some APFS weirdness".