| ▲ | ofrzeta 4 days ago |
| SUSE Linux Enterprise still uses Btrfs as the root FS, so it can't be that bad, right? What is Chris Mason actually doing these days? I did some googling and only found that he's been working on a tool called "rsched". |
|
| ▲ | yjftsjthsd-h 4 days ago | parent | next [-] |
| I used btrfs a few years ago, on OpenSUSE, on a single disk, because I also thought that would work. It lost my root filesystem twice.
| |
|
| ▲ | dmm 4 days ago | parent | prev | next [-] |
| btrfs is fine for single disks or mirrors. In my experience, the main advantages of ZFS over btrfs are that ZFS has production-ready RAID5/6-style parity modes and much better performance for small sync writes, which are common for databases and for hosting VM images.
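| To make the contrast concrete, a rough sketch of the two layouts being compared (device names are placeholders, not a recommendation):
|
|     # ZFS: parity RAID (raidz) is a first-class, production-supported layout
|     zpool create tank raidz2 /dev/sda /dev/sdb /dev/sdc /dev/sdd
|
|     # btrfs: the raid1 (mirror) profiles are the ones generally considered stable;
|     # the built-in raid5/6 profiles still carry known caveats
|     mkfs.btrfs -d raid1 -m raid1 /dev/sde /dev/sdf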
| |
| ▲ | riku_iki 4 days ago | parent | next [-] |
| > has much better performance for small sync writes
|
| I spent some time researching this topic, and in all the benchmarks I've seen and in my personal tests, btrfs is faster or much faster: https://www.reddit.com/r/zfs/comments/1i3yjpt/very_poor_perf...
| ▲ | dmm 3 days ago | parent [-] |
| Thanks for sharing! I just set up an fs benchmark system, and I'll run your fio command so we can compare results.
|
| I have a question about your fio args, though. I think "--ioengine=sync" and "--iodepth=16" are incompatible, in the sense that the effective iodepth will only be 1: "Note that increasing iodepth beyond 1 will not affect synchronous ioengines" [1]
|
| Is there a reason you used that ioengine as opposed to, for example, "libaio" with a "--direct=1" flag?
|
| [1] https://fio.readthedocs.io/en/latest/fio_doc.html#cmdoption-...
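| For reference, a rough sketch of the two fio invocations being contrasted here — block size, file size, runtime, and target directory are placeholder values, not the original command:
|
|     # Synchronous engine: the effective queue depth is 1, so --iodepth is ignored
|     fio --name=syncwrite --ioengine=sync --rw=randwrite --bs=4k --fsync=1 --size=1G --runtime=60 --directory=/mnt/test
|
|     # Asynchronous engine with O_DIRECT: --iodepth=16 actually keeps 16 I/Os in flight
|     fio --name=aiowrite --ioengine=libaio --direct=1 --iodepth=16 --rw=randwrite --bs=4k --size=1G --runtime=60 --directory=/mnt/test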
| ▲ | riku_iki 3 days ago | parent [-] |
| My intuition is that the majority of software uses the standard sync FS API.
|
| ▲ | m-p-3 3 days ago | parent | prev [-] |
| Context: I mostly dealt with RAID1 in a home NAS setup.
|
| A ZFS pool will remain available even in degraded mode. Correct me if I'm wrong, but with BTRFS you mount the array through one of the volumes that is part of the array, not the array itself, so if that specific mounted volume happens to go down, the array becomes unavailable until you remount it through another available volume that is part of the array, which isn't great for availability.
|
| I thought about mitigating that by making an mdadm RAID1 formatted with BTRFS and mounting the virtual volume instead, but then you lose the ability to prevent bit rot, since BTRFS loses that visibility if it doesn't manage the array natively.
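| For illustration, a minimal sketch of that layered approach (device names, array name, and mount point are hypothetical):
|
|     # mdadm provides the mirroring...
|     mdadm --create /dev/md0 --level=1 --raid-devices=2 /dev/sda1 /dev/sdb1
|     # ...and btrfs sits on top as a single-device filesystem: its checksums can still
|     # detect bit rot in data, but there is no second copy of the data to repair from
|     mkfs.btrfs /dev/md0
|     mount /dev/md0 /mnt/nas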
| ▲ | wtallis 3 days ago | parent [-] |
| > with BTRFS you mount the array through one of the volumes that is part of the array, not the array itself
|
| I don't think btrfs has a concept of having only some subvolumes usable. Either you can mount the filesystem or you can't.
|
| What may have confused you is that you can mount a btrfs filesystem by referring to any individual block device that it uses, and the kernel will track down the others. But if the one device you have listed in /etc/fstab goes missing, you won't be able to mount the filesystem without fixing that issue. You can prevent the issue in the first place by identifying the filesystem by UUID instead of by an individual block device.
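| As a sketch of that last suggestion (mount point and options are placeholders; every member device of a multi-device btrfs filesystem reports the same filesystem UUID):
|
|     # Find the filesystem UUID
|     blkid /dev/sda1
|     btrfs filesystem show
|
|     # /etc/fstab entry keyed on the UUID rather than a specific /dev node
|     # UUID=<filesystem-uuid>  /srv/array  btrfs  defaults  0  0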
| ▲ | m-p-3 2 days ago | parent [-] |
| > I don't think btrfs has a concept of having only some subvolumes usable. Either you can mount the filesystem or you can't.
|
| You can still mount the BTRFS array as degraded if you specify it during mount. But that leads to other issues: for example, data written while degraded will not automatically be copied over without doing a scrub, whereas ZFS will resilver it automatically.
|
| > You can prevent the issue in the first place by identifying the filesystem by UUID instead of by an individual block device.
|
| I tried that, but all it does is select the first available block device during mount, so if that device goes down, the mount also goes down.
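| Roughly what that degraded-mount-and-scrub sequence looks like, as a hedged sketch (UUID and mount point are placeholders; this is not a full recovery procedure):
|
|     # Mount the remaining device(s) read-write despite a missing member
|     mount -o degraded UUID=<filesystem-uuid> /srv/array
|
|     # After the missing device returns, redundancy is not restored automatically;
|     # a scrub rewrites the stale or missing copies from the good ones
|     btrfs scrub start /srv/array
|     btrfs scrub status /srv/array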
|
| ▲ | xelxebar 3 days ago | parent | prev | next [-] |
| I've used btrfs for 5-ish years in the most mundane, default setup possible. However, in that time, I've had three instances of corruption across three different drives, all resulting in complete loss of the filesystem. Two of these were simply due to hard power failures, and the third was due to a flaky CPU. AFAIU, btrfs effectively absolves itself of responsibility in these cases, claiming the issue is buggy drive firmware.
|
| ▲ | petre 3 days ago | parent | prev [-] |
| We use openSUSE, and I always switch the installs to ext4. No fancy features, but it always works and doesn't lose my root FS.