tdullien | 9 days ago
Every predictor is a compressor, and every compressor is a predictor. If you're interested in this, it's worth reading about the Hutter Prize (https://en.wikipedia.org/wiki/Hutter_Prize) and going from there. In general, lossless compression works by predicting the next symbol (letter/token/frame) and then encoding the difference between the prediction and the actual data succinctly. The better you predict, the less you need to encode, and the better you compress. The flip side is that every field of compression has a lot to gain from progress in AI.
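A minimal sketch of the predict-then-compress idea. Rather than implementing a full arithmetic coder, this computes the ideal code length a coder would achieve under a given predictive model (-log2 of the probability assigned to each actual next character). The two models here (a uniform baseline and an adaptive order-1 model with Laplace smoothing) are illustrative assumptions, not any real compressor's algorithm:

```python
import math
from collections import defaultdict

def code_length_bits(text, predict):
    """Ideal total code length under a model: sum of -log2 p(next char).

    An arithmetic coder driven by the same model would approach this
    length, so a better predictor directly means better compression."""
    total = 0.0
    for i, ch in enumerate(text):
        total += -math.log2(predict(text[:i], ch))
    return total

# Baseline: uniform over 256 byte values -- no prediction, 8 bits/char.
def uniform(context, ch):
    return 1 / 256

# Adaptive order-1 model (hypothetical example): predicts the next char
# from counts of what followed the previous char, with +1 smoothing.
def make_order1():
    counts = defaultdict(lambda: defaultdict(int))
    totals = defaultdict(int)
    def predict(context, ch):
        prev = context[-1] if context else ""
        p = (counts[prev][ch] + 1) / (totals[prev] + 256)
        counts[prev][ch] += 1   # update the model after predicting,
        totals[prev] += 1       # exactly as a decoder could replay it
        return p
    return predict

text = "ab" * 100
print(code_length_bits(text, uniform))       # exactly 8 bits per char
print(code_length_bits(text, make_order1())) # fewer bits: the pattern is learned
```

The decoder can run the same model in lockstep (updating counts only on already-decoded symbols), which is why no side information about the model needs to be transmitted.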
rahimnathwani | 8 days ago | parent
Also check out this contest: https://www.mattmahoney.net/dc/text.html. Fabrice Bellard's nncp (mentioned in a different comment) leads it.