yjftsjthsd-h | 4 days ago
> It's also much faster than ZFS at mounting a disk with a large number of filesystems (=subvolumes), which is critical for building certain types of fileservers at scale.

Now you've piqued my curiosity; what uses that many filesystems/subvolumes? (Not an attack; I believe you, I'm just trying to figure out where it comes up.)
williamstein | 4 days ago
It can be useful to create a file server with one filesystem/subvolume per user: each user has their own isolated snapshots, backups via send/recv are user-specific, quotas are easier, and so on. If you only have a few hundred users, ZFS is fine. But what if you have 100,000 users? Then just running "zpool import" would take hours, whereas mounting a btrfs filesystem with 100,000 subvolumes takes seconds. That difference was a showstopper for architecting a certain solution on top of ZFS, even though I personally love ZFS and have used it for a long time.

The btrfs commands and UX are really awkward (for me) compared to ZFS, but btrfs is extremely efficient at some things where ZFS just falls down. The main criticism of btrfs in this thread involves multi-disk setups, which aren't relevant for me, since I'm working on cloud systems where disk storage is abstracted away as a single block device.
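For the curious, a rough sketch of what that per-user layout looks like with btrfs. The mount point /srv/tank, the user name, and the snapshot/backup paths are made up for illustration:

    # one subvolume per user (hypothetical layout under /srv/tank)
    btrfs subvolume create /srv/tank/users/alice

    # per-user, read-only snapshot for backups
    btrfs subvolume snapshot -r /srv/tank/users/alice \
        /srv/tank/snapshots/alice@2024-01-01

    # incremental, user-specific backup via send/receive
    btrfs send -p /srv/tank/snapshots/alice@2023-12-31 \
        /srv/tank/snapshots/alice@2024-01-01 | btrfs receive /backup/alice

    # quotas require qgroups to be enabled on the filesystem
    btrfs quota enable /srv/tank
    btrfs qgroup limit 10G /srv/tank/users/alice

The ZFS equivalent would be one "zfs create tank/users/alice" per user, and all of those datasets get found and mounted at "zpool import" time, which is where the hours-vs-seconds difference shows up.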

yencabulator | 4 days ago
As far as I understand, a core use case at Meta was build system workers starting with prepopulated state and being able to quickly discard the working tree at the end of the build. CoW is pretty sweet for that. |
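In case it's useful, a minimal sketch of that pattern (the subvolume paths here are hypothetical): take a writable snapshot of a prepopulated subvolume per build job, then drop it when the job finishes:

    # cheap CoW copy of a prepopulated source/toolchain subvolume
    btrfs subvolume snapshot /build/golden /build/job-1234

    # ... worker runs the build inside /build/job-1234 ...

    # discard the whole working tree in one operation
    btrfs subvolume delete /build/job-1234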