torginus 6 days ago

What are his arguments then?

>Searle argues that, without "understanding" (or "intentionality"), we cannot describe what the machine is doing as "thinking" and, since it does not think, it does not have a "mind" in the normal sense of the word.

This is the only sentence that seems to point to what constitutes the specialness of humans, and the terms 'understanding' and 'intentionality' are in scare quotes, so who knows? This sounds like the archetypal no-true-Scotsman fallacy.

In mathematical analysis, if we conclude that the difference between two numbers is smaller than any arbitrary positive number we can pick, those two numbers must be equal. In engineering, we can weaken the claim to 'any difference large enough to care about'.
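
In symbols, the standard fact from analysis being invoked here (a minimal restatement, with a and b the two numbers) is:

    \forall \varepsilon > 0 :\ |a - b| < \varepsilon \;\Longrightarrow\; a = b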

Likewise, if the difference between a real human brain and an arbitrarily sophisticated Chinese Room brain is arbitrarily small, they are the same.

If our limited understanding of physics and engineering keeps the practical difference from being zero, this essentially becomes a somewhat magical 'superscience' argument: a claim that we can't simulate the real world at a high enough resolution for the meaningful differences between our 'consciousness simulator' and the thing itself to disappear - which is an extraordinary claim.

CrossVR 6 days ago | parent

> What are his arguments then?

They're in the "Complete Argument" section of the article.

> This sounds like the archetypical no true scotsman fallacy.

I get what you're trying to say, but he is not arguing that only a true Scotsman is capable of thought. He is arguing that our current machines lack the required "causal powers" for thought. Powers that he doesn't ascribe only to a true Scotsman, though maybe we should try adding bagpipes to our AI just to be sure...

torginus 6 days ago | parent

Thanks, but that makes his arguments even less valid.

He argues that computer programs only manipulate symbols and thus have no semantic understanding.

But that's not true - many programs, like the compilers that already existed when the argument was made, had semantic understanding of the code (limited, but they did have some grasp of what the program did).
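
As a rough illustration of what "limited semantic understanding" means here - a toy sketch of my own using Python's ast module, not any real compiler - the syntax pass accepts the program, while a separate pass reasons about what the names refer to:

    # Toy semantic check: the parser accepts the syntax, but a second pass
    # reasons about what the names *refer to* and rejects the program.
    import ast
    import builtins

    source = "x = 1\nprint(y)"   # syntactically valid, semantically broken
    tree = ast.parse(source)     # syntax pass: no complaints

    defined = {t.id for node in ast.walk(tree) if isinstance(node, ast.Assign)
               for t in node.targets if isinstance(t, ast.Name)}
    used = {node.id for node in ast.walk(tree)
            if isinstance(node, ast.Name) and isinstance(node.ctx, ast.Load)}

    for name in used - defined - set(dir(builtins)):
        print(f"semantic error: name '{name}' is not defined")

A real compiler's symbol resolver or type checker does the same kind of thing at much larger scale: it models what the code denotes, not just which characters it contains.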

LLMs, in contrast, have a very rich semantic understanding of the text they parse - their tensor representations encode a lot about each token, or you can just ask them about anything - they might not be human-level at reading subtext, but they're not horrible either.
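
For a concrete (if crude) way to poke at what those tensor representations encode - this assumes the Hugging Face transformers and torch packages and the public gpt2 checkpoint, and it only shows geometric structure, not that the structure amounts to understanding - compare the contextual vector of the same word used in different senses:

    # Crude probe: the same surface token gets a different vector depending
    # on context, and same-sense uses tend to land closer together.
    import torch
    from transformers import AutoModel, AutoTokenizer

    tok = AutoTokenizer.from_pretrained("gpt2")
    model = AutoModel.from_pretrained("gpt2")

    def embed(sentence, word):
        """Last-layer hidden state of `word` as it appears in `sentence`."""
        ids = tok(sentence, return_tensors="pt")
        with torch.no_grad():
            hidden = model(**ids).last_hidden_state[0]   # (seq_len, 768)
        pos = next(i for i, t in enumerate(tok.tokenize(sentence)) if word in t)
        return hidden[pos]

    fin1 = embed("The bank approved my loan.", "bank")
    fin2 = embed("The bank raised its interest rates.", "bank")
    river = embed("We sat on the river bank.", "bank")

    cos = torch.nn.functional.cosine_similarity
    # Typically the two financial uses come out closer to each other
    # than either does to the river use.
    print(cos(fin1, fin2, dim=0).item(), cos(fin1, river, dim=0).item())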

CrossVR 6 days ago | parent

Now you're getting to the heart of the thought experiment. Because does it really understand the code or subtext, or is it just really good at fooling us into thinking it does?

When it makes a mistake, did it just have too limited an understanding, or did it simply get unlucky with its prediction of the next word? Is there even a difference between the two?

I would like to agree with you that there's no special "causal power" that Turing machines can't emulate. But I remain skeptical, not out of chauvinism, but out of caution. Because I think it's dangerous to assume an AI understands a problem simply because it said the right words.