hgomersall | 7 hours ago
Not necessarily. Consider a big file of uniformly distributed random bytes. It's easy to show that in practice some byte values are more common than others (because of random fluctuation), and therefore that the expected spacing between occurrences of those specific byte values is less than 256. That gives you a small fraction of a bit you could save by recoding those specific bytes (as the distance from the last byte of the same value). With a big enough file, those fractions of a bit add up to a non-trivial number of bits. You can be cunning about how you encode your deltas too (the next delta can make use of the unused bits left over from the previous delta). I haven't worked through all the details, so it might be that in the end everything rebalances to say no, but I'd like to withhold judgement for the moment.
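The first part of the claim, that a finite sample of uniform random bytes always has some values over-represented by chance, is easy to check empirically. A minimal sketch (the seed and sample size are arbitrary choices): compute the empirical byte frequencies and the entropy of the empirical distribution, which comes out strictly below 8 bits per byte. Whether that tiny gap is actually exploitable is the open question, since describing *which* bytes are over-represented costs bits too.

```python
import math
import random
from collections import Counter

# Generate 1 MB of uniformly distributed random bytes (seeded for reproducibility).
data = random.Random(0).randbytes(1_000_000)

counts = Counter(data)
n = len(data)

# Entropy of the *empirical* distribution, in bits per byte.
empirical_entropy = -sum((c / n) * math.log2(c / n) for c in counts.values())

most_common_byte, max_count = counts.most_common(1)[0]
expected_count = n / 256  # 3906.25 under perfect uniformity

print(f"empirical entropy: {empirical_entropy:.6f} bits/byte (< 8.0)")
print(f"most common byte value {most_common_byte}: {max_count} occurrences "
      f"vs {expected_count:.2f} expected")
```

Because 1,000,000 is not divisible by 256, the counts cannot all be equal, so the empirical entropy is always strictly less than 8 bits; the shortfall is on the order of a few ten-thousandths of a bit per byte for a sample this size.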
l33t7332273 | 3 hours ago
> It's easy to show that in practice some bytes are more common than others (because random)

I don't follow. Wouldn't that be (because not random)?