▲ | mhandley 7 months ago |
You could do the same splitting trick but only split at progressively increasing file lengths at the character '5'. The "compression" would be worse, so you'd need a larger starting file, but you could still satisfy the requirements this way and be independent of the filenames. The decompressor would just sort the files by increasing length before merging.
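A minimal sketch of this variant (my own illustration, not from the thread): split at '5' bytes chosen so that chunk lengths strictly increase, drop the separators, and decompress by sorting chunks by length and rejoining with '5'. The function names are hypothetical.

```python
def compress(data: bytes, sep: bytes = b"5") -> list[bytes]:
    """Split `data` at occurrences of `sep` so chunk lengths strictly
    increase.  The separator byte itself is not stored, so the chunks
    total one byte less per split -- the "compression"."""
    chunks: list[bytes] = []
    start, min_len = 0, 0
    while True:
        # Find the next '5' far enough out that this chunk is
        # strictly longer than the previous one.
        idx = data.find(sep, start + min_len)
        if idx == -1:
            break
        chunks.append(data[start:idx])
        min_len = idx - start + 1
        start = idx + 1
    rest = data[start:]
    if chunks and len(rest) <= len(chunks[-1]):
        # The tail would break the increasing-length rule, so fold it
        # back into the last chunk, re-inserting the separator we cut.
        chunks[-1] += sep + rest
    else:
        chunks.append(rest)
    return chunks

def decompress(chunks: list[bytes], sep: bytes = b"5") -> bytes:
    # File order is irrelevant: sorting by length recovers it.
    return sep.join(sorted(chunks, key=len))
```

Because every chunk has a distinct length, the decompressor needs neither filenames nor any stored ordering.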
▲ | gus_massa 7 months ago | parent | next [-]
Nice idea, but doesn't this require the partial files' lengths to grow linearly, and hence the original file's size to grow quadratically? If the length of one file is X, then in the next file you must skip the first X characters and look for a '5', which on average sits around position X+128. So the average length of the Nth file is 128*N, and to shave off C bytes the original file must be ~128*C^2/2 bytes (instead of the linear 128*C in the article).
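The sum behind that estimate can be checked directly (a worked check of my own, using the comment's 128-byte average gap): if the Nth chunk averages 128*N bytes, saving C separator bytes needs a source of about 128*C*(C+1)/2 bytes.

```python
def source_size_needed(c: int, gap: int = 128) -> int:
    """Bytes the original file must hold to shave off `c` separator
    bytes when the Nth chunk averages gap*N bytes:
    sum_{N=1..c} gap*N = gap*c*(c+1)/2, i.e. ~gap*c^2/2."""
    return gap * c * (c + 1) // 2
```

For large C this dwarfs the article's linear gap*C cost.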
▲ | anamexis 7 months ago | parent | prev [-]
That's a neat idea.