Xcelerate 20 hours ago

There's another interesting loophole to these general compression challenges. A universal Turing machine will necessarily compress some number of strings (despite the fact that almost all strings are incompressible). The set of compressible strings varies depending on the choice of UTM, and if your UTM is fixed, you're out of luck for random data. But if the UTM is unspecified, then there exist an infinite number of UTMs that will compress any specific string.

To some extent, this resembles the approach of "hiding the data in the decompressor". But the key difference here is that you can make it less obvious by selecting a particular programming language capable of universal computation, and it is the choice of language that encodes the missing data. For example, suppose we have ~17k programming languages to choose from—the language selection itself encodes about 14 bits (log2(17000)) of information.
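(A quick sanity check on that arithmetic in Python; the ~17k language count is of course just an assumption:)

    import math

    n_languages = 17_000                 # assumed number of candidate languages
    selection_bits = math.log2(n_languages)
    print(round(selection_bits, 2))      # ~14.05 bits carried by the choice alone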

If there are m bits of truly random data to compress and n choices of programming language capable of universal computation, then as n/m approaches infinity, the probability that at least one language can compress an arbitrary string approaches 1. The catch is that the required ratio is unrealistically large for any random string more than a few bits long.
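(A toy model of that claim, with made-up numbers: pretend each language independently compresses a random fraction p of all m-bit strings. Then the chance that at least one of n languages compresses a given string is 1 - (1 - p)^n, which only gets close to 1 for enormous n:)

    def p_any_compresses(n: int, p: float) -> float:
        """Chance that at least one of n independent languages compresses the string."""
        return 1 - (1 - p) ** n

    # Say each language happens to compress one string in a million:
    for n in (17_000, 10**6, 10**9):
        print(n, round(p_any_compresses(n, 1e-6), 4))
    # -> 0.0169, 0.6321, 1.0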

There's also the additional caveat that we're assuming the set of compressible strings is algorithmically independent for each choice of UTM. This certainly isn't the case. The invariance theorem states that |K_U(x) - K_V(x)| ≤ c for all x, where U and V are UTMs, K is Kolmogorov complexity, and c is a constant that depends only on U and V. So in our case, c is effectively the size of the largest minimal transpiler between any two of the programming languages we have to choose from, and I'd imagine c is quite small.

Oh, and this all requires computing the Kolmogorov complexity of a string of random data. Which we can't, because that's uncomputable.

Nevertheless it's an interesting thought experiment. I'm curious what the smallest m is such that we could realistically compress a random m-bit string given the available programming languages out there. Unfortunately, I imagine it's probably something like 6, and no one is going to give you an award for compressing 6 bits of random data.

SideQuark 16 hours ago | parent | next

The problem with the UTM argument designed for a specific string is that the UTM size grows without bound. You also don't need a UTM, which will be much larger than a simple TM or finite automaton. And now we're back to the original problem: designing a machine capable of compressing the specific string while having a smaller description than the string itself.

UTM provides no gain.

Xcelerate 13 hours ago | parent

You cannot specify the “size” of an arbitrary Turing machine without knowing the index of that machine in some encoding scheme. If we are to account for the possibility of any string being selected as the one to compress, and if we wish to consider all possible algorithmic ways to do this, then the encoding scheme necessarily corresponds to that of a universal Turing machine.

It’s a moot point anyway as most programming languages are capable of universal computation.

Again, I’m not claiming we can compress random data. I’m claiming we can (sometimes) make it look like we can compress random data by selecting a specific programming language from one of many possibilities. This selection itself constitutes the “missing” information.

SideQuark 4 hours ago | parent

> You cannot specify the “size” of an arbitrary Turing machine

Related to compression, this is, as I said, irrelevant. Kolmogorov complexity, in the first-order naive sense, picks a machine and then talks about length. In the complete sense, via Kolmogorov invariance, which extends the naive version to any possible encoding, it shows that there is a fundamental minimal length over any machine. Then one proves that the vast majority of strings are incompressible. Without this machine invariance there could be no notion of the inherent Kolmogorov complexity of a string.

This has all been known since the 1960s.

So no, it matters not which UTM, FSM, TM, Java, hand-coded wizard machine, or whatever you choose. This line of reasoning was investigated decades ago and thrown out because it does not work.

Your claim

>then there exist an infinite number of UTMs that will compress any specific string

combined with your claim

> For example, suppose we have ~17k programming languages to choose from—the language selection itself encodes about 14 bits (log2(17000)) of information.

Does not let you use that information to compress the string you wish to compress, unless you're extremely lucky and the information in the language choice happens to match, say, the string's prefix, so that you know how to apply the language-encoded information to the string. Otherwise you also need to encode how the language choice maps to the prefix (or any other part), and that information is going to be as large as or larger than what you save. Your program, to reconstruct the data you think the language choice hides, will have to restore that data; how exactly does your choice of program do this? Does it list all 17k languages and look up the proper information? Whoops, once again your program is too big.
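(The pigeonhole version of that, sketched with toy numbers: once the decoder is charged for the language index as well as the program, there aren't enough short descriptions to go around:)

    import math

    m = 64                                # bits of random data to "compress"
    n = 17_000                            # available languages
    index_bits = math.ceil(math.log2(n))  # cost of naming the language: 15 bits

    # Programs must be short enough that index + program still beats m bits.
    max_prog_bits = m - index_bits - 1
    total_descriptions = n * (2 ** (max_prog_bits + 1) - 1)  # all (index, program) pairs
    print(total_descriptions < 2 ** m)    # True: most 64-bit strings stay uncovered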

So no, you cannot get a free ride with this choice.

By your argument, Kolmogorov complexity (the invariance version over any possible machine) allows compression of any string, which has been proven impossible for decades.

_hark 19 hours ago | parent | prev | next

The issue with the invariance theorem you point out always bugged me.

Let s be an algorithmically random string relative to UTM A. Is it the case that there exists some pathological UTM S, such that K(s|S) (the Kolmogorov complexity of s relative to S) is arbitrarily small? I.e., running the blank program on S produces s. And does such an S always exist for every s?

Is there some way of defining a meta-complexity measure, the complexity of some UTM without a reference UTM? It seems intuitive that although some pathological UTM might exist that can "compress" whichever string you have, its construction appears very unnatural. Is there some way of formalizing this "naturalness"?

Xcelerate 14 hours ago | parent | next

> Is it the case that there exists some pathological UTM S, such that K(s|S) (the Kolmogorov complexity of s relative to S) is arbitrarily small

Yes. It’s not even that hard to create. Just take a standard UTM and perform a branching “if” statement to check if the input is the string of interest before executing any other instructions.
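Something like this sketch, where Python's own interpreter stands in for the "standard UTM" and TARGET is whatever string you want to look compressible (both are stand-ins, not anyone's real construction):

    import contextlib
    import io

    TARGET = b"any fixed string s you like"  # the string being smuggled into the machine

    def pathological_utm(program: bytes) -> bytes:
        """A universal machine rigged so that the empty program outputs TARGET."""
        if program == b"":
            return TARGET  # K(TARGET | this machine) is essentially zero
        # Everything else defers to an ordinary universal machine (Python itself
        # here), which is what keeps the rigged machine universal.
        buf = io.StringIO()
        with contextlib.redirect_stdout(buf):
            exec(program.decode())
        return buf.getvalue().encode()

    assert pathological_utm(b"") == TARGET
    assert pathological_utm(b"print('hi')") == b"hi\n"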

> Is there some way of defining a meta-complexity measure, the complexity of some UTM without a reference UTM?

Haha, not that anyone knows of. This is one of the issues with Solomonoff induction as well. Which UTM do we pick to make our predictions? If no UTM is privileged over any other, then some will necessarily give very bad predictions. Averaged over all possible induction problems, no single UTM can be said to be superior to the others either. Solomonoff wrote an interesting article about this predicament a while ago.

(A lot of people will point to the constant offset of Kolmogorov complexity due to choice of UTM as though it somehow trivializes the issue. It does not. That constant is not like the constant in time complexity which is usually safe to ignore. In the case of Solomonoff induction, it totally changes the probability distribution over possible outcomes.)
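(To make that concrete: the Solomonoff prior weights a hypothesis h by 2^{-K(h)}, and applying the invariance bound K_V(h) - c ≤ K_U(h) ≤ K_V(h) + c to a pair of hypotheses shows the odds between them can shift by up to a factor of 2^{2c} when you swap reference machines:

    \frac{2^{-K_U(h_1)}}{2^{-K_U(h_2)}} \;\le\; 2^{2c} \cdot \frac{2^{-K_V(h_1)}}{2^{-K_V(h_2)}}

With a small dataset, a factor of 2^{2c} can swamp the evidence entirely.)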

_hark 6 hours ago | parent

Interesting. I guess then we would only be interested in the normalized complexity of infinite strings, e.g. lim_{n→∞} K(X|n)/n, where X is an infinite sequence (e.g. the decimal expansion of some real number) and K(X|n) is the complexity of its first n symbols. This quantity should still be unique without reference to the choice of UTM, no?
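(I believe it is: the machine dependence vanishes in the limit, because the invariance constant gets divided by n,

    \left| \frac{K_U(X|n)}{n} - \frac{K_V(X|n)}{n} \right| \le \frac{c}{n} \to 0.

As standard sanity checks: for a computable X such as the digits of pi, K(X|n) = O(log n), so the limit is 0, while for a Martin-Löf random X the limit is 1.)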

Ono-Sendai 13 hours ago | parent | prev

You are correct to be bugged IMO, I agree with you. My thoughts: https://forwardscattering.org/page/Kolmogorov%20complexity

Kolmogorov complexity is useless as an objective measure of complexity.

Xcelerate 6 hours ago | parent | next

Nice blog post. I wasn’t aware of those comments by Yann LeCun and Murray Gell-Mann, but it’s reassuring to know there are some experts who have been wondering about this “flaw” in Kolmogorov complexity as well.

I wouldn’t go so far as to say Kolmogorov complexity is useless as an objective complexity measure however. The invariance theorem does provide a truly universal and absolute measure of algorithmic complexity — but it’s the complexity between two things rather than of one thing. You can think of U and V as “representatives” of any two partial recursive functions u(x) and v(x) capable of universal computation. The constant c(u, v) is interesting then because it is a natural number that depends only on the two abstract functions themselves and not the specific Turing machines that compute the functions.
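(In symbols, the standard two-sided statement is

    K_U(x) \le K_V(x) + c(U, V), \qquad K_V(x) \le K_U(x) + c(V, U) \quad \forall x,

where c(U, V) is at most the length of a V-interpreter written for U, i.e. the “minimal transpiler” mentioned earlier.)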

What does that mean philosophically? I’m not sure. It might mean that the notion of absolute complexity for a finite string isn’t a coherent concept, i.e., complexity is fundamentally a property of the relationship between things rather than of a thing.

Ono-Sendai 5 hours ago | parent

Yeah, we may have to take seriously that the notion of complexity for a finite string (or system) has no reasonable definition.

_hark 6 hours ago | parent | prev

Hmm. At least it's still fine to define limits of the complexity for infinite strings. That should be unique, e.g.:

lim_{n→∞} K(X|n)/n

Possible solutions that come to mind:

1) UTMs are actually too powerful, and we should use a finitary abstraction to have a more sensible measure of complexity for finite strings.

2) We might need to define a kind of "relativity of complexity". This is my preferred approach and something I've thought about to some degree. That is, we want a way of describing the complexity of something relative to our computational resources.
