auggierose 8 days ago

First: true propositions (that are not provable) can definitely be expressed; if they couldn't, the incompleteness theorem would not be true ;-)
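For reference, one standard formulation of the theorem being invoked, using the usual textbook notation ($T$ for the theory, $G_T$ for its Gödel sentence); this is background, not something from the thread:

```latex
% Gödel's first incompleteness theorem, standard form.
% T: a consistent, effectively axiomatizable theory that
%    interprets elementary arithmetic.
% G_T: the Gödel sentence constructed for T.
\[
  T \nvdash G_T
  \quad\text{and}\quad
  T \nvdash \lnot G_T .
\]
% If T is additionally sound, then G_T is true in the standard
% model of arithmetic: a true proposition that the language of T
% can express, but that T cannot prove.
```

So the sentence $G_T$ is precisely an expressible-but-unprovable truth.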

It would be interesting to know what percentage of the people who invoke the incompleteness theorem have no clue what it actually says.

Most people don't even know what a proof is, so that cannot be a hindrance on the path to AGI ...

Second: ANY world model that can be digitally represented would be subject to the same argument (if stated correctly), not only LLMs.

bithive123 8 days ago | parent [-]

I knew someone would call me out on that. I used the wrong word; what I meant was "expressed in a way that would satisfy", which implies proof within the symbolic order being used. I don't claim to be a mathematician or philosopher.

auggierose 8 days ago | parent [-]

Well, you don't get it. The LLM can definitely state propositions "that satisfy" (let's just call them true propositions), and the incompleteness theorem says precisely that stating such a proposition is not the same as having a proof of it.

Why would you require an LLM to have a proof for the things it says? I mean, that would be nice, and I am actually working on that, but we don't require it of humans and/or HN commenters, do we?
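To make the stating-vs-proving gap concrete, here is a minimal sketch in Lean 4, assuming Mathlib is available; the names `IsGoldbach` and `goldbach_conjecture` are purely illustrative:

```lean
import Mathlib

-- Stating a proposition is cheap. Goldbach's conjecture fits in
-- two lines, yet no proof of it is known to anyone.
def IsGoldbach (n : ℕ) : Prop :=
  ∃ p q : ℕ, p.Prime ∧ q.Prime ∧ p + q = n

-- The statement type-checks fine; the proof is a separate
-- artifact, and `sorry` marks it as missing.
theorem goldbach_conjecture : ∀ n : ℕ, 2 < n → 2 ∣ n → IsGoldbach n := by
  sorry
```

An LLM (or a human) can emit the statement all day; only the part after `:= by` is what a proof checker would actually demand.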

bithive123 8 days ago | parent [-]

I clearly do not meet the requirements to use the analogy.

I am hearing the term "superintelligence" a lot, and it seems to me the only form that could take is the machine spitting out a bunch of symbols that either delight or dismay the humans, which implies they already know what it looks like.

If this technology is going to advance science or even be useful in everyday life, then surely the propositions it generates will need to hold up against reality, either through axiomatic rigor or empirically. I look forward to finding out whether that happens.

But it's still just a movement from the known to the known, a very limited affair no matter how many new symbols you add in whatever permutation.