compsciphd 5 hours ago

par2 is very limited.

It only supports 32k parts in total (in practice that means 16k source parts and 16k parity parts).

Let's take 100GB of data (relatively large, but within the realm of what someone might want to protect): each part will be ~6MB in size. You might think that since you also created 100GB of parity data (6MB × 16384 parity parts) you're well protected. You're wrong.
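A one-line sanity check on that arithmetic (just a sketch; 16384 is the source-block ceiling claimed above, not a par2 default):

    # Quick check of the block-size arithmetic above.
    data_bytes = 100 * 10**9   # 100GB of source data
    src_blocks = 16384         # practical per-set source-block ceiling claimed above
    print(f"{data_bytes / src_blocks / 10**6:.1f} MB per block")  # ~6.1 MB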

Now let's say there are 20,000 random single-byte errors over that 100GB. Not a lot of errors, but par2 will not be able to protect you if those errors land in more distinct source blocks than you have intact parity blocks: par2 can only repair whole blocks, so every block with even one bad byte consumes one parity block. So at the simplest level, 20KB of errors can be unrecoverable.
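To make that concrete, here's a rough Monte Carlo sketch of the scattered-error scenario. All parameters are illustrative assumptions, not par2 defaults; in particular it assumes a more typical ~10% redundancy rather than the 100% in the example above:

    import random

    # Illustrative parameters, not par2 defaults.
    SRC_BLOCKS = 16384   # source blocks over 100GB (~6MB each)
    PAR_BLOCKS = 1638    # parity blocks at an assumed ~10% redundancy
    N_ERRORS   = 20000   # scattered single-byte errors

    # par2 repairs whole blocks, so one bad byte costs one full parity block.
    damaged = {random.randrange(SRC_BLOCKS) for _ in range(N_ERRORS)}

    print(f"distinct damaged source blocks: {len(damaged)}")  # typically ~11,500
    print("recoverable" if len(damaged) <= PAR_BLOCKS
          else "UNRECOVERABLE: more damaged blocks than parity blocks")

The total corruption is only ~20KB, but it spends roughly 11,500 blocks (~70GB) of repair capacity, far more than the parity on hand.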

par2 was created for usenet when a) the binaries being posted weren't so large, b) the article parts being posted weren't so large, and c) the error model it was trying to protect against was whole articles not coming through (or, equivalently, arriving with errors). In the olden days of usenet binary posting you would see many "part repost requests"; those basically disappeared with the introduction of par (then quickly par2). It fails badly under many other error models.

catlikesshrimp 7 minutes ago

You can split files so you can have more par blocks (100GB split into 100 × 1GB parts, with 32k blocks per part); see the sketch below.
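A back-of-the-envelope for that workaround (the part and block counts are the ones suggested above):

    DATA_BYTES = 100 * 10**9
    PARTS      = 100     # 1GB pieces
    BLOCKS     = 32768   # par2 blocks per part
    block_bytes = (DATA_BYTES // PARTS) // BLOCKS

    print(f"total blocks: {PARTS * BLOCKS:,}")            # 3,276,800
    print(f"block size:   {block_bytes / 1024:.1f} KiB")  # ~29.8 KiB

Each scattered error now costs ~30KB of repair capacity instead of ~6MB. One caveat: each part gets its own independent par2 set, so parity from one part cannot repair damage in another.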

e145bc455f1 4 hours ago

What other tool do you recommend?

iberator 4 hours ago

Just pay for storage instead. It's absurd that rich developers will do ANYTHING but pay for basic services, ruining the internet for those in real need.

We can't have nice things.