nullc | 3 days ago
One thing to keep in mind is that correction always comes at the expense of detection: generally, a code that can always detect N errors can only always correct N/2 of them. So you detect an errored block and correct up to N/2 errors. The block now passes, but if it actually contained N errors, your "correction" was wrong and you now have silent corruption.

The solution is to provision an excess of error-correction power and then not use all of it, keeping the remainder in reserve for detection. But that can be hard to do if you're trying to shoehorn FEC into an existing 32-bit CRC.

How big are the blocks that the CRC units cover in bcachefs?
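To make the failure mode concrete, here's a toy sketch (plain Python, a 5x repetition code, nothing any real filesystem uses): the code has minimum distance 5, so it can detect any 4 bit flips, but majority-vote decoding only corrects 2. Feed it 3 flips and it silently "corrects" to the wrong value, even though a pure detector would have flagged the block as invalid.

    from collections import Counter

    def encode(bit):
        # 5x repetition code: minimum distance 5
        return [bit] * 5

    def correct(word):
        # Majority-vote decoding: corrects up to 2 flipped bits
        return Counter(word).most_common(1)[0][0]

    codeword = encode(0)

    # 2 flips: within the correction radius, decodes correctly
    assert correct([1, 1, 0, 0, 0]) == 0

    # 3 flips: still a *detectable* error (not a valid codeword),
    # but decoding "corrects" it to 1 -- silent corruption
    assert correct([1, 1, 1, 0, 0]) == 1

Reserving power means decoding less aggressively than the code allows: e.g. only correct 1 flip and treat 2+ as detected-but-uncorrectable, so you keep a wider margin against miscorrection.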
koverstreet | 3 days ago | parent
bcachefs checksums (and compresses) at extent granularity, not block granularity; encoded (checksummed/compressed) extents are limited to 128k by default. This is a really good tradeoff in practice: the vast majority of applications do buffered IO, not small-block O_DIRECT reads - that really only comes up in benchmarks :) And it gets us better compression ratios and lower metadata overhead.

We also have quite a bit of flexibility to add something bigger to the extent for FEC if we need to - we're not limited to a 32/64 bit checksum.
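A rough back-of-the-envelope sketch of the metadata-overhead point (the checksum size and granularities here are illustrative assumptions, not bcachefs's actual on-disk layout): checksumming at 128k extent granularity instead of 4k block granularity cuts the number of checksums stored per unit of data by 32x.

    # Illustrative numbers only, not bcachefs's real on-disk format
    CSUM_BYTES = 8            # assume a 64-bit checksum per checksummed unit
    DATA_SIZE = 1 << 30       # 1 GiB of file data

    for unit in (4 << 10, 128 << 10):   # 4k blocks vs 128k extents
        n_csums = DATA_SIZE // unit
        overhead = n_csums * CSUM_BYTES
        print(f"{unit >> 10:>4}k granularity: {n_csums} checksums, "
              f"{overhead / 1024:.0f} KiB of checksum metadata")

For 1 GiB that's 262144 checksums (2048 KiB) at 4k granularity versus 8192 checksums (64 KiB) at 128k, before counting the rest of the per-extent metadata.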