| ▲ | LeoPanthera 4 days ago |
| It's sort of frustrating that this constantly comes up. It's true that btrfs does have issues with RAID-5 and RAID-6 configurations, but this is frequently used (not necessarily by you) as some kind of gotcha as to why you shouldn't use it at all. That's insane. I promise that disk-spanning issues won't affect your use of it on your tiny ThinkPad SSD. It's important to note that striping and mirroring work just fine. It's only the 5/6 modes that are unstable: https://btrfs.readthedocs.io/en/stable/Status.html#block-gro... |
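If you're unsure which profiles your own filesystem uses, it's a one-command check (the mount point is illustrative, and the sample output is just what a typical single-disk install looks like):

```
# List the block-group profiles in use; "RAID5"/"RAID6" in this
# output is the unstable case, while single/DUP/RAID0/RAID1/RAID10
# are all on the stable list.
btrfs filesystem df /mnt

# Typical output for a single-disk install looks something like:
#   Data, single: total=10.00GiB, used=7.51GiB
#   System, DUP: total=32.00MiB, used=16.00KiB
#   Metadata, DUP: total=1.00GiB, used=620.45MiB
```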
|
| ▲ | betaby 4 days ago | parent | next [-] |
| But RAID-6 is the closest approximation to ZFS's raid-z2! And raid-z2 has been stable for over a decade. Indeed, btrfs works just fine on my laptop. My point is that Linux lacks a ZFS-like filesystem for large multi-disk setups. |
| |
▲ | NewJazz 4 days ago | parent [-] | | Seriously, for the people who take filesystems seriously and have strong preferences... multi-disk support might be important. | | |
| ▲ | wtallis 4 days ago | parent [-] | | BTRFS does have stable, usable multi-disk support. The RAID 0, 1, and 10 modes are fine. I've been using BTRFS RAID1 for over a decade and across numerous disk failures. It's by far the best solution for building a durable array on my home server stuffed full of a random assortment of disks—ZFS will never have the flexibility to be useful with mismatched capacities like this. It's only the parity RAID modes that BTRFS lacks, and that's a real disadvantage but is hardly the whole story. | | |
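A minimal sketch of that kind of mixed-capacity pool (device names and the devid are made up):

```
# btrfs RAID1 keeps two copies of each chunk on whichever two devices
# have the most free space, so member disks don't need matching sizes.
mkfs.btrfs -m raid1 -d raid1 /dev/sdb /dev/sdc /dev/sdd
mount /dev/sdb /srv/pool

# Grow the pool later with whatever disk is on hand, then rebalance
# so existing chunks spread onto the new device (this can take a while).
btrfs device add /dev/sde /srv/pool
btrfs balance start /srv/pool

# After a failure, replace the dead device in place; the devid (here 3)
# comes from `btrfs filesystem show /srv/pool`.
btrfs replace start 3 /dev/sdf /srv/pool
```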
| ▲ | Filligree 3 days ago | parent [-] | | That’s nice and all, but I have five disks in my server. I want the 6 mode. In practice RAIDZ2 works great. | | |
| ▲ | wtallis 3 days ago | parent [-] | | In the case of five disks of the same capacity, RAID6 or RAIDZ2 only gets you 20% more capacity than btrfs RAID1. That's not exactly a huge disparity, usually not enough to be a show-stopper on its own. There are plenty of scenarios where the features ZFS has which btrfs lacks are more important than the features that btrfs has which ZFS lacks. My point is simply that btrfs RAID1 has its uses and shouldn't be dismissed out of hand. |
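To make the arithmetic concrete, assume five 10 TB disks:

```
Raw capacity:           5 × 10 TB       = 50 TB
RAID6 / RAIDZ2 usable:  (5 − 2) × 10 TB = 30 TB
btrfs RAID1 usable:     50 TB / 2       = 25 TB
Ratio:                  30 / 25         = 1.2, i.e. 20% more
```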
|
| ▲ | AaronFriel 4 days ago | parent | prev | next [-] |
| Respectfully to the maintainers: How can this be a stable filesystem if its parity modes are unstable and risk data loss? How has this been allowed to happen? It just seems so profoundly unserious to me. |
| |
| ▲ | wtallis 4 days ago | parent [-] | | Does the whole filesystem need to be marked as unstable if it has a single experimental feature? Is any other filesystem held to that standard? | | |
▲ | AaronFriel 4 days ago | parent | next [-] | | Parity support in multi-disk arrays is older than I am; it's a fairly standard feature. After 17 years of development, btrfs still doesn't support it without risking data loss. | |
▲ | wtallis 4 days ago | parent [-] | | If you're not interested in a multi-disk storage system without (stable, non-experimental) parity modes, that's a valid personal preference. But it's not a justification for the position that the rest of the features cannot be stable, or that the project as a whole cannot be taken seriously by anyone. | |
| |
▲ | nextaccountic 4 days ago | parent | prev [-] | | Maybe this specific feature should be marked as unstable and disabled by default in most kernel builds, unless you add something like btrfs.experimental=1 to the kernel command line |
|
| ▲ | __turbobrew__ 4 days ago | parent | prev | next [-] |
| How can I know which configurations of btrfs will lose my data? I have also had to deal with thousands of nodes kernel-panicking due to a btrfs bug in Linux kernel 6.8 (a stable Ubuntu release). |
| |
▲ | ffsm8 4 days ago | parent | next [-] | | I thought the usual recommendation was to use mdadm to build the disk pool and then put btrfs on top of that, but that might be out of date; I haven't used it in a while. | |
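For reference, that layered setup is roughly the following (a sketch with hypothetical device names):

```
# md handles the parity RAID; btrfs on top only sees one block device
# and contributes checksums, snapshots, and compression.
mdadm --create /dev/md0 --level=6 --raid-devices=5 \
    /dev/sdb /dev/sdc /dev/sdd /dev/sde /dev/sdf
mkfs.btrfs /dev/md0
mount /dev/md0 /srv/pool
```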
▲ | necheffa 3 days ago | parent [-] | | This is very much a big compromise, where you decide for yourself that storage capacity and maybe throughput are more important than anything else. The md metadata is not adequately protected. Btrfs checksums can tell you when a file has gone bad, but they can't self-heal it. And I'm sure there are caching/performance benefits left on the table by not having btrfs manage all the block storage itself. |
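Concretely, a scrub on that layered setup can flag corruption but, for data, has nothing to repair it from (paths are illustrative; metadata usually defaults to DUP and can still self-heal):

```
# Run a foreground scrub and check the result; with single-profile
# data on top of md, checksum failures show up as uncorrectable
# errors rather than being repaired from a second copy.
btrfs scrub start -B /srv/pool
btrfs scrub status /srv/pool
```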
| |
▲ | mook 4 days ago | parent | prev [-] | | I thought most distros had basically disabled the footgun modes at this point; that is, you'd need to work hard to get into a configuration that would lose data (at which point you'd have seen all the warnings about data loss). | |
▲ | __turbobrew__ 3 days ago | parent [-] | | See the part of my comment where the btrfs kernel driver panicked on the stable Ubuntu 24 kernel. We are using a fairly simple config, but under certain heavy load patterns the kernel would panic: https://git.kernel.org/pub/scm/linux/kernel/git/stable/linux... I hear people say all the time that btrfs is stable now and that people are just complaining about issues from when btrfs was new, but please explain to me how the bug I linked is OK in a stable release of the most popular Linux distro. |
|
| ▲ | rendaw 4 days ago | parent | prev | next [-] |
| > on your tiny ThinkPad SSD Ad hominem. My ThinkPad SSD is massive. |
|
| ▲ | risho 4 days ago | parent | prev [-] |
| As it turns out, RAID 5 and 6 being broken is kind of a big deal for people. It's also far from ideal that the filesystem has random landmines that you can accidentally step on if you don't happen to read Hacker News every day. |
| |
| ▲ | jorams 3 days ago | parent [-] | | FWIW: RAID 5 and 6 having problems is not a random hole you'll accidentally stumble into. The man page for mkfs.btrfs says: > Warning: RAID5/6 has known problems and should not be used in production. When you actually tell it to use raid5 or raid6, mkfs.btrfs will also print a large warning: > WARNING: RAID5/6 support has known problems is strongly discouraged to be used besides testing or evaluation. |
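For what it's worth, the command that triggers that warning looks like this (device names are examples; raid1c3 metadata is the commonly suggested pairing for raid6 data, since both survive two device failures):

```
# mkfs prints the RAID5/6 warning quoted above and then proceeds.
mkfs.btrfs -d raid6 -m raid1c3 /dev/sdb /dev/sdc /dev/sdd /dev/sde
```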
|