gus_massa | 5 hours ago:
Nice idea, but doesn't this require the partial files to grow linearly in length, and the original file to grow quadratically? If one partial file has length X, then the next file must skip the first X characters and then look for a "5", which on average appears around position X+128. So the Nth file has average length 128*N, and to encode C bytes the original file must be roughly 128*C^2/2 bytes (rather than the linear 128*C given in the article).
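(A quick sketch of the arithmetic behind this estimate. It assumes the expected gap to the next "5" is 128 characters, i.e. that bytes are uniform over 128 values; that's the assumption in the comment above, not something verified against the article. The function name `total_size` is made up for illustration.)

```python
def total_size(C, gap=128):
    """Total original-file size needed to encode C bytes, if the
    Nth partial file has expected length gap*N (assumed gap of 128
    characters to the next '5')."""
    # Sum of gap*N for N = 1..C, which equals gap*C*(C+1)/2 ~ gap*C^2/2.
    return sum(gap * n for n in range(1, C + 1))

C = 1000
print(total_size(C))        # exact sum: 128*C*(C+1)/2
print(128 * C * C // 2)     # the ~128*C^2/2 approximation
```

For C = 1000 the exact sum and the 128*C^2/2 approximation differ by about 0.1%, so the quadratic estimate is the right order of growth.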
mhandley | 34 minutes ago:
Yes, I think it is quadratic. I don't claim it's practical (the original isn't practical either), just that the dependency on filenames isn't fundamental.