torginus 6 days ago

I'd appreciate it if you tried to explain why instead of resorting to ad hominem.

> I think it's entirely valid to question whether a computer can form an understanding through deterministically processing instructions, whether that be through programming language or language training data.

Since the real world (including probabilistic and quantum phenomena) can be modeled with deterministic computation (a pseudorandom sequence is deterministic, yet simulates randomness), a powerful enough computer could simulate the brain closely enough for it to behave identically to the real thing.
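To illustrate the pseudorandomness point, here is a minimal Python sketch (the seed and sequence length are arbitrary): a seeded generator is a pure function of its seed, yet for most purposes its output is statistically indistinguishable from true randomness.

    import random

    # Two generators seeded identically produce the same "random" stream:
    rng_a = random.Random(42)
    rng_b = random.Random(42)

    seq_a = [rng_a.random() for _ in range(5)]
    seq_b = [rng_b.random() for _ in range(5)]

    assert seq_a == seq_b  # fully deterministic, run after run
    print(seq_a)           # yet the values look random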

The original 'Chinese Room' thought experiment describes a book of static rules for Chinese - which is probably not the way to go, and AI does not work like that: it is probabilistic in both its training and its evaluation.

What you are arguing is that constructing an artificial consciousness lies beyond our current computational ability (probably) and our understanding of physics (possibly), but that does not rule out solving these issues at some point, and it implies no fundamental roadblock to artificial consciousness.

I've re-read the argument (https://en.wikipedia.org/wiki/Chinese_room) and I cannot help but conclude that Searle argues that 'understanding' is only something that humans can do, which means that real humans are special in some way that a simulation of human-shaped atoms is not.

Which is an argument for the existence of the supernatural and for deist thinking.

CrossVR 6 days ago | parent | next

> I'd appreciate if you tried to explain why instead of resorting to ad hominem.

It is not meant as an ad hominem. If someone thinks our current computers can't emulate human thinking and draws the conclusion that therefore humans have special powers given to them by a deity, then that probably means that person is quite religious.

I'm not saying you personally believe that and therefore your arguments are invalid.

> Since the real world (including probabilistic and quantum phenomena) can be modeled with deterministic computation (a pseudorandom sequence is deterministic, yet simulates randomness), if we have a powerful enough computer we can simulate the brain to a sufficient degree to have it behave identically as the real thing.

The idea that a sufficiently complex pseudo-random number generator can emulate real-world non-determinism well enough to fully simulate the human brain is quite an assumption. It could be true, but it's not something I would accept as a matter of fact.

> I've re-read the argument (https://en.wikipedia.org/wiki/Chinese_room) and I cannot help but conclude that Searle argues that 'understanding' is only something that humans can do, which means that real humans are special in some way a simulation of human-shaped atoms are not.

In that same Wikipedia article Searle denies he's arguing for that. And even if he did secretly believe that, it doesn't really matter, because we can draw our own conclusions.

Disregarding his arguments because you feel he holds a hidden agenda, isn't that itself an ad hominem?

(Also, I apologize for using two accounts, I'm not attempting to sock puppet)

torginus 6 days ago | parent

What are his arguments then?

>Searle argues that, without "understanding" (or "intentionality"), we cannot describe what the machine is doing as "thinking" and, since it does not think, it does not have a "mind" in the normal sense of the word.

This is the only sentence that seems to point to what constitutes the specialness of humans, and the terms 'understanding' and 'intentionality' are in scare quotes, so who knows? This sounds like the archetypal 'no true Scotsman' fallacy.

In mathematical analysis, if we conclude that the difference between two numbers is smaller than any arbitrary number we can pick, those two numbers must be the same. In engineering, we can relax the claim to 'any difference large enough to care about'.
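Stated formally, this is the standard analysis lemma (in LaTeX notation):

    \forall \varepsilon > 0 : \; |a - b| < \varepsilon \implies a = b

(if a \neq b, pick \varepsilon = |a - b| and you get the contradiction |a - b| < |a - b|).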

Likewise, if the difference between a real human brain and an arbitrarily sophisticated Chinese Room brain can be made arbitrarily small, they are the same.

If our limited understanding of physics and engineering keeps the practical difference from being zero, this becomes a somewhat magical 'superscience' argument: the claim that we can never simulate the real world at a resolution good enough for the meaningful differences between our 'consciousness simulator' and the thing itself to disappear - which is an extraordinary claim.

CrossVR 6 days ago | parent

> What are his arguments then?

They're in the "Complete Argument" section of the article.

> This sounds like the archetypical no true scotsman fallacy.

I get what you're trying to say, but he is not arguing that only a true Scotsman is capable of thought. He is arguing that our current machines lack the required "causal powers" for thought. Powers that he doesn't ascribe only to true Scotsmen, though maybe we should try adding bagpipes to our AI just to be sure...

torginus 6 days ago | parent

Thanks, but that makes his arguments even less valid.

He argues that computer programs only manipulate symbols and thus have no semantic understanding.

But that's not true - many programs, like the compilers that existed back when the argument was made, had semantic understanding of code: limited, certainly, but they did derive real facts about what a program meant.
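As a minimal illustration, here is a toy constant-folding pass in Python using the standard ast module (the example program is arbitrary). The pass works only because the compiler knows what + and * mean, not merely what they look like as symbols:

    import ast

    source = "x = 2 * (3 + 4)"
    tree = ast.parse(source)

    class ConstantFolder(ast.NodeTransformer):
        """Replace arithmetic on constants with the computed result."""

        def visit_BinOp(self, node):
            self.generic_visit(node)  # fold children first
            if isinstance(node.left, ast.Constant) and isinstance(node.right, ast.Constant):
                if isinstance(node.op, ast.Add):
                    return ast.copy_location(
                        ast.Constant(node.left.value + node.right.value), node)
                if isinstance(node.op, ast.Mult):
                    return ast.copy_location(
                        ast.Constant(node.left.value * node.right.value), node)
            return node

    folded = ConstantFolder().visit(tree)
    print(ast.unparse(folded))  # prints: x = 14

Whether that counts as 'understanding' is of course exactly what's in dispute, but the pass operates on the meaning of the expression, not just its surface form.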

LLMs, in contrast, have a very rich semantic representation of the text they parse - their tensor representations encode a lot about each token, or you can just ask them about anything. They might not be at human level in reading subtext, but they're not horrible either.
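One way to see this concretely is to compare embeddings (a sketch assuming the sentence-transformers library and its all-MiniLM-L6-v2 model; any embedding model would do): texts with similar meaning but different surface forms land close together in the vector space.

    import numpy as np
    from sentence_transformers import SentenceTransformer

    model = SentenceTransformer("all-MiniLM-L6-v2")

    sentences = [
        "The bank raised interest rates.",
        "The lender increased the cost of borrowing.",
        "We picnicked on the river bank.",
    ]
    vecs = model.encode(sentences)

    def cosine(a, b):
        return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

    # The two finance sentences share no key words, yet should score
    # closer to each other than to the river sentence, which shares "bank".
    print(cosine(vecs[0], vecs[1]))  # expected: relatively high
    print(cosine(vecs[0], vecs[2]))  # expected: lower

Whether that similarity structure amounts to understanding is, again, the question at issue.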

CrossVR 6 days ago | parent

Now you're getting to the heart of the thought experiment. Because does it really understand the code or subtext, or is it just really good at fooling us that it does?

When it makes a mistake, did it just have a too limited understanding or did it simply not get lucky with its prediction of the next word? Is there even a difference between the two?

I would like to agree with you that there's no special "causal power" that Turing machines can't emulate. But I remain skeptical, not out of chauvinism, but out of caution. Because I think it's dangerous to assume an AI understands a problem simply because it said the right words.

dahart 6 days ago | parent | prev

> I cannot help but conclude that Searle argues that ‘understanding’ is only something that humans can do, which means…

Regardless of whether Searle is right or wrong, you’ve jumped to conclusions and are misunderstanding his argument and making further assumptions based on your misunderstanding. Your argument is also ad hominem in that it accuses people of believing things they don’t believe. Maybe it would be prudent to read some of the good critiques of Searle before trying to litigate it rapidly and sloppily on HN.

The randomness stuff is very much a straw man, definitely not a good argument; best to drop it. Today’s LLMs are deterministic, not random. Pseudorandom sequences come in different varieties, but they model some properties of randomness, not all of them. The functioning of today’s neural networks, both training and inference, is exactly a book of static rules, despite their use of pseudorandom sequences.
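To make the "book of static rules" point concrete, here is a toy sketch (the weights and input are arbitrary stand-ins, not a real network): once the weights are fixed, inference is a pure function, and the same input always produces the same output.

    import numpy as np

    rng = np.random.default_rng(0)    # pseudorandom, hence itself deterministic
    W = rng.standard_normal((4, 3))   # stand-in for trained weights
    x = np.array([1.0, 0.5, -0.2])    # stand-in for an input embedding

    logits = W @ x
    probs = np.exp(logits) / np.exp(logits).sum()  # softmax over 4 "tokens"

    print(probs)  # identical on every run: a fixed rulebook being evaluated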

In case you missed it in the WP article, most of the field of cognitive science thinks Searle is wrong. However, they’re largely not critiquing him for using metaphysics, because that’s not his argument. He’s arguing that biology has mechanisms that binary electronic circuitry doesn’t; nothing special about human brains, simply physical, chemical, and biological processes. That much is certainly true. Whether there’s a difference in theory is unproven. But today there absolutely is a difference in practice: nobody has ever simulated the real world or a human brain using deterministic computation.

torginus 6 days ago | parent

If the scientific consensus is that he's wrong, why is he constantly brought up and defended - am I not right to call that out?

Nobody brings up that light travels through the aether or that diseases are caused by bad humors, etc. - is it not right to call out people for stating a theory that's believed to be false?

>The randomness stuff is very straw man,

And it was a direct response to what armada651 wrote:

>I think it's entirely valid to question whether a computer can form an understanding through deterministically processing instructions, whether that be through programming language or language training data.

> He’s arguing that biology has mechanisms that binary electronic circuitry doesn’t; not human brains, simply physical chemical and biological processes.

Once again the argument here has changed from 'computers which only manipulate symbols cannot create consciousness' to 'we don't have the algorithm for consciousness yet'.

He might have successfully argued against the expert systems of his time - and true, mechanistic attempts at language translation have largely failed - but that doesn't extend to modern LLMs (and pre-LLM AI) or even statistical methods.

dahart 6 days ago | parent

You’re making more assumptions. There’s no “scientific consensus” that he’s wrong, there are just opinions. Unlike the straw man examples you bring up, the claims you’re making have never been proven. If they had been, the argument would go away like the others you mentioned.

Where did the argument change? Searle’s argument that you quoted is not arguing that we don’t have the algorithm yet. He’s arguing that the algorithm doesn’t run on electrical computers.

I’m not defending his argument, just pointing out that yours isn’t compelling because you don't seem to fully understand his, at least your restatement of it isn’t a good faith interpretation. Make his argument the strongest possible argument, and then show why it doesn’t work.

IMO modern LLMs don’t prove anything here. They don’t understand anything. LLMs aren’t evidence that computers can successfully think, only that humans are prone either to anthropomorphic hyperbole or to gullibility. That doesn’t mean computers can’t think, but I don’t think we’ve seen it yet, and I’m certainly not alone there.

torginus 5 days ago | parent

>most of the field of cognitive science thinks Searle is wrong.

>There’s no “scientific consensus” that he’s wrong, there are just opinions.

dahart 5 days ago | parent

And? Are you imagining that these aren’t both true at the same time? If so, I’m happy to explain. Since nothing has been proven, there’s nothing “scientific”. And since there’s some disagreement, “consensus” has not been achieved yet. This is why your presumptive use of “scientific consensus” was not correct, and why the term “scientific consensus” is not the same thing as “most people think”. A split of 60/40 or 75/25 or even 90/10 counts as “most” but does not count as “consensus”. So I guess maybe be careful about assuming what something means; it seems like this thread was limited by several incorrect assumptions.