_hark 19 hours ago

The issue with the invariance theorem you point out always bugged me.

Let s be an algorithmically random string relative to UTM A. Is it the case that there exists some pathological UTM S such that K(s|S) (the Kolmogorov complexity of s relative to S) is arbitrarily small? I.e., the blank (empty) program on S just prints s. And does such an S exist for any s?

Is there some way of defining a meta-complexity measure, the complexity of some UTM without a reference UTM? Intuitively, although some pathological UTM exists that can "compress" whichever string you have, its construction appears very unnatural. Is there some way of formalizing this "naturalness"?

Xcelerate 14 hours ago | parent | next

> Is it the case that there exists some pathological UTM S, such that K(s|S) (the Kolmogorov complexity of s relative to S) is arbitrarily small

Yes. It’s not even that hard to create. Just take a standard UTM and perform a branching “if” statement to check if the input is the string of interest before executing any other instructions.
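A minimal sketch of that construction, treating machines as string-to-string functions (Python as pseudocode; base_utm and the target string are placeholders, not any particular machine):

    def base_utm(program: str) -> str:
        # Stand-in for an ordinary universal machine's interpreter.
        raise NotImplementedError("placeholder for a real UTM simulator")

    TARGET_S = "the algorithmically random string we want to 'compress'"

    def pathological_utm(program: str) -> str:
        # The empty program acts as the "blank print statement",
        # so K(TARGET_S | pathological_utm) = 0.
        if program == "":
            return TARGET_S
        # Any ordinary program p still runs here as "1" + p (one marker
        # symbol prepended), so the machine stays universal and every other
        # string's complexity grows by at most one symbol.
        return base_utm(program[1:])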

> Is there some way of defining a meta-complexity measure, the complexity of some UTM without a reference UTM?

Haha, not that anyone knows of. This is one of the issues with Solomonoff induction as well. Which UTM do we pick to make our predictions? If no UTM is privileged over any other, then some will necessarily give very bad predictions. Averaged over all possible induction problems, no single UTM can be said to be superior to the others either. Solomonoff wrote an interesting article about this predicament a while ago.

(A lot of people will point to the constant offset of Kolmogorov complexity due to the choice of UTM as though it somehow trivializes the issue. It does not. That constant is not like the constant in time complexity, which is usually safe to ignore. In the case of Solomonoff induction, it totally changes the probability distribution over possible outcomes.)
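To make that concrete, writing M_U for the Solomonoff prior relative to a prefix universal machine U:

    M_U(x) = \sum_{p : U(p) = x*} 2^{-|p|}

(the sum runs over programs whose output begins with x). The simulation argument behind the invariance theorem only guarantees

    M_U(x) \ge 2^{-c(U,V)} M_V(x)

where c(U,V) is the length of a U-prefix that makes U simulate V, and symmetrically with the machines swapped. So changing the reference machine can reweight the prior of any given string by a factor as large as 2^c, which for short strings swamps everything else.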

_hark 6 hours ago | parent

Interesting. I guess then we would only be interested in the normalized complexity of infinite strings, e.g. lim n-> \infty K(X|n)/n, where X is an infinite sequence (e.g. the decimal expansion of some real number) and K(X|n) is the complexity of its first n symbols. This quantity should still be unique w/o reference to the choice of UTM, no?

Ono-Sendai 13 hours ago | parent | prev

You're right to be bugged, IMO; I agree with you. My thoughts: https://forwardscattering.org/page/Kolmogorov%20complexity

Kolmogorov complexity is useless as an objective measure of complexity.

Xcelerate 6 hours ago | parent | next

Nice blog post. I wasn’t aware of those comments by Yann LeCun and Murray Gell-Mann, but it’s reassuring to know there are some experts who have been wondering about this “flaw” in Kolmogorov complexity as well.

I wouldn’t go so far as to say Kolmogorov complexity is useless as an objective complexity measure, however. The invariance theorem does provide a truly universal and absolute measure of algorithmic complexity — but it’s the complexity between two things rather than of one thing. You can think of U and V as “representatives” of any two partial recursive functions u(x) and v(x) capable of universal computation. The constant c(u, v) is interesting then because it is a natural number that depends only on the two abstract functions themselves and not on the specific Turing machines that compute them.
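For reference, the standard statement being leaned on here is

    K_U(x) \le K_V(x) + c(U,V)   for all finite strings x

where c(U,V) is, up to an additive constant, the length of the shortest U-program that interprets V-programs. It depends only on the pair of machines and not on x, which is the sense in which it measures a relation between two machines rather than a property of any one string.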

What does that mean philosophically? I’m not sure. It might mean that the notion of absolute complexity for a finite string isn’t a coherent concept, i.e., complexity is fundamentally a property of the relationship between things rather than of a thing.

Ono-Sendai 5 hours ago | parent

Yeah, we may have to take seriously that the notion of complexity for a finite string (or system) has no reasonable definition.

_hark 6 hours ago | parent | prev

Hmm. At least it's still fine to define limits of the complexity for infinite strings. That should be unique, e.g.:

lim n->\infty K(X|n)/n
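That does check out: the invariance theorem gives |K_U(X|n) - K_V(X|n)| \le c(U,V) for every n, so

    |K_U(X|n)/n - K_V(X|n)/n| \le c(U,V)/n -> 0 as n -> \infty

and the limit, when it exists, is the same for every choice of universal machine U.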

Possible solutions that come to mind:

1) UTMs are actually too powerful, and we should use a finitary abstraction to have a more sensible measure of complexity for finite strings.

2) We might need to define a kind of "relativity of complexity". This is my preferred approach and something I've thought about to some degree. That is, we want a way of describing the complexity of something relative to our computational resources (one standard formalization in this direction is sketched below).
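One existing formalization in that direction (a standard definition, not necessarily what's meant above) is resource-bounded Kolmogorov complexity, e.g. time-bounded K and Levin's Kt:

    K^t_U(x) = \min { |p| : U(p) = x within t(|x|) steps }

    Kt_U(x) = \min { |p| + \log t : U(p) = x within t steps }

Both charge a description not just for its length but for the time needed to decompress it, which is one way of making "complexity relative to our computational resources" precise.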