torginus 6 days ago

Thanks, but that makes his arguments even less valid.

He argues that computer programs only manipulate symbols and thus have no semantic understanding.

But that's not true: many programs, like the compilers that already existed when the argument was made, had some semantic understanding of the code. It was limited, but they did know something about what the program actually did, beyond its surface symbols.
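
To make that concrete, here's a toy sketch (mine, not from the thread) of what a compiler's semantic-analysis pass does. The names and types are hypothetical; the point is that the check operates on what the symbols *mean* (their declared types), not on how the expression is spelled:

    # Toy semantic check in the style of a compiler's type-checking pass.
    # The symbol table below is a made-up example for illustration.
    symbol_table = {"x": "int", "y": "str"}  # hypothetical declared types

    def check_add(lhs: str, rhs: str) -> str:
        """Reject `lhs + rhs` when the operand types don't match."""
        lt, rt = symbol_table[lhs], symbol_table[rhs]
        if lt != rt:
            # This is a semantic error: the expression parses fine,
            # but its meaning is rejected.
            raise TypeError(f"cannot add {lt} and {rt}")
        return lt

    try:
        check_add("x", "y")
    except TypeError as e:
        print("semantic error:", e)  # semantic error: cannot add int and str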

LLMs, in contrast, have a very rich semantic representation of the text they parse: their internal vector representations encode a lot about each token, and you can simply ask them about anything they read. They may not be human-level at picking up subtext, but they're not terrible at it either.
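
A minimal sketch of the "vectors encode semantics" point: in learned embedding spaces, related words end up near each other, which you can measure with cosine similarity. The vectors below are invented for illustration; a real model learns them from data:

    import numpy as np

    # Hypothetical 4-d embeddings, made up to illustrate the geometry.
    emb = {
        "dog": np.array([0.9, 0.8, 0.1, 0.0]),
        "puppy": np.array([0.85, 0.75, 0.2, 0.05]),
        "carburetor": np.array([0.0, 0.1, 0.9, 0.8]),
    }

    def cos(a, b):
        """Cosine similarity: 1.0 means same direction, ~0 means unrelated."""
        return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

    print(cos(emb["dog"], emb["puppy"]))       # high: semantically close
    print(cos(emb["dog"], emb["carburetor"]))  # low: unrelated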

CrossVR 6 days ago | parent

Now you're getting to the heart of the thought experiment: does it really understand the code or the subtext, or is it just very good at fooling us into thinking it does?

When it makes a mistake, was its understanding simply too limited, or did it just get unlucky predicting the next word? Is there even a difference between the two?

I would like to agree with you that there's no special "causal power" that Turing machines can't emulate. But I remain skeptical, not out of chauvinism but out of caution, because I think it's dangerous to assume an AI understands a problem simply because it said the right words.