AnotherGoodName 6 hours ago

It's actually not the best at enwik8 or 9.

The results at https://www.mattmahoney.net/dc/text.html explicitly add the size of the compressor itself to the result. Note the "enwik9+prog" column. That's what it's ranked on.

The reason to do this is that it's trivial to create a compressor that 'compresses' a file to 0 bytes: just ship an executable with a full copy of enwik9 inside it that writes that out given any input. So we always measure what is effectively the Kolmogorov complexity: the program plus data as a whole that produces the result we want.
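The degenerate "compressor" described above can be sketched in a few lines. The `EMBEDDED` constant here is a small stand-in for a full copy of enwik9 baked into the program; the names are illustrative, not from any real entrant:

```python
# A stand-in for the ~1 GB of enwik9 that would be embedded in the binary.
EMBEDDED = b"example corpus standing in for enwik9"

def compress(data: bytes) -> bytes:
    # "Compresses" the target file to zero bytes -- but only because
    # the program already contains a full copy of it.
    assert data == EMBEDDED
    return b""

def decompress(archive: bytes) -> bytes:
    # Reproduces the file from an empty archive by emitting the copy.
    assert archive == b""
    return EMBEDDED
```

The archive is 0 bytes, but program + archive together is at least as large as the original file, which is exactly why the benchmark counts program size.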

So those results add in the compressor size. The programs there generally have no dictionary built in, or, in the case of LLM-based compressors, no pre-trained data. They effectively build the model as they process data: compressing very little at the start and steadily compressing better as they go. This is why these programs do better and better with larger data sets. They start with zero knowledge; after a GB or so they have very good knowledge of the corpus of human language.
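The learn-as-you-go behaviour can be demonstrated with a toy adaptive order-0 model (a much simpler stand-in for the context-mixing or LLM models on that list). It tracks byte frequencies as it streams through the data and charges each byte its ideal code length under the current model, so early bytes are expensive and later bytes get cheap:

```python
import math
from collections import Counter

def adaptive_code_lengths(data: bytes) -> list:
    """Ideal code length (bits) per byte under an adaptive order-0 model.
    The model starts with no knowledge (uniform over 256 byte values)
    and updates its frequency counts after coding each byte."""
    counts = Counter()
    total = 0
    lengths = []
    for b in data:
        # Laplace-smoothed probability over the 256 possible byte values.
        p = (counts[b] + 1) / (total + 256)
        lengths.append(-math.log2(p))
        counts[b] += 1
        total += 1
    return lengths

data = b"abracadabra" * 200
bits = adaptive_code_lengths(data)
early = sum(bits[:100]) / 100   # avg bits/byte over the first 100 bytes
late = sum(bits[-100:]) / 100   # avg bits/byte once the model has learned
```

The very first byte costs a full 8 bits; by the end of the stream the per-byte cost approaches the entropy of the data, so `early` is noticeably larger than `late`.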

This program here, however, is pre-trained and shipped with a model. It's 150MB in size! That means it has 150MB of extra starting knowledge over the models on that list. The top entrants there are the better compressors; they'll quickly out-learn and overtake this one, they just don't have that head start.

Of course, to compare fairly, this program should have that 150MB of program size added to its results.
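The arithmetic of the fair comparison is simple. The sizes below are hypothetical round numbers chosen only to illustrate the point, not actual benchmark figures:

```python
# Hypothetical sizes in bytes (illustrative only, not real results).
learner_prog = 500_000          # self-learning entrant: ~0.5 MB executable
learner_archive = 115_000_000   # its compressed enwik9 output

pretrained_prog = 150_000_000   # entrant shipping a 150 MB model
pretrained_archive = 100_000_000

def benchmark_total(archive: int, prog: int) -> int:
    # The benchmark ranks on archive size PLUS program size
    # (the "enwik9+prog" column).
    return archive + prog

# Despite producing the smaller archive, the pre-trained entrant loses
# once its 150 MB of shipped knowledge is counted.
assert benchmark_total(pretrained_archive, pretrained_prog) > \
       benchmark_total(learner_archive, learner_prog)
```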

srcreigh 6 hours ago | parent | next [-]

As an aside, I wonder how to account for the information content embedded in the hardware itself.

A Turing Machine compressor program would likely have more bytes than the amd64 binary. So how to evaluate KolmogorovComplexity(amd64)?

The laws of physics somehow need to be accounted for too, probably.

Dylan16807 28 minutes ago | parent | next [-]

The complexity of a simple turing machine is itty bitty, and you can bootstrap that into an x86 emulator in a matter of kilobytes, so when we're messing with 100MB files it's not a big factor.

d_burfoot 6 hours ago | parent | prev [-]

Kolmogorov complexity is only defined up to an additive constant, which represents the length of a translator between Turing machines.
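This is the invariance theorem: for any two universal machines $U$ and $V$ there is a constant depending only on the pair (roughly, the length of a program for $U$ that interprets $V$), such that for every string $x$:

```latex
K_U(x) \le K_V(x) + c_{U,V}
```

So the choice of reference machine (a Turing machine, an amd64 CPU, ...) shifts every complexity by at most a fixed constant that is independent of $x$.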

notpushkin 17 minutes ago | parent [-]

I guess we need to guesstimate the length of the shortest Turing machine implementation of amd64 then?

rao-v 3 hours ago | parent | prev [-]

If every version of your OS ships with a default local model, it may be interesting to see compression conditioned on the standard LLM weights.