torginus 6 days ago

If the scientific consensus is that he's wrong, why is he constantly being brought up and defended - am I not right to call them out, then?

Nobody brings up that light travels through the aether, or that diseases are caused by bad humors, etc. - is it not right to call out people for stating a theory that's believed to be false?

>The randomness stuff is very straw man,

And it was a direct response to what armada651 wrote:

>I think it's entirely valid to question whether a computer can form an understanding through deterministically processing instructions, whether that be through programming language or language training data.

> He’s arguing that biology has mechanisms that binary electronic circuitry doesn’t; not human brains, simply physical chemical and biological processes.

Once again, the argument here has changed from 'computers which only manipulate symbols cannot create consciousness' to 'we don't have the algorithm for consciousness yet'.

He might have successfully argued against the expert systems of his time - and true, mechanistic attempts at language translation have largely failed - but that doesn't extend to modern LLMs (and pre-LLM AI), or even to statistical methods.

dahart 6 days ago | parent

You’re making more assumptions. There’s no “scientific consensus” that he’s wrong; there are just opinions. Unlike the straw-man examples you bring up, nobody has proven the claims you’re making. If they had, the argument would have gone away like the others you mentioned.

Where did the argument change? The argument of Searle’s that you quoted is not that we don’t have the algorithm yet; he’s arguing that the algorithm doesn’t run on electronic computers.

I’m not defending his argument, just pointing out that yours isn’t compelling because you don’t seem to fully understand his; at least, your restatement of it isn’t a good-faith interpretation. Make his argument the strongest it can be, and then show why it doesn’t work.

IMO modern LLMs don’t prove anything here. They don’t understand anything. LLMs aren’t evidence that computers can successfully think; they only prove that humans are prone either to anthropomorphic hyperbole or to gullibility. That doesn’t mean computers can’t think, but I don’t think we’ve seen it yet, and I’m certainly not alone there.

torginus 5 days ago | parent

>most of the field of cognitive science thinks Searle is wrong.

>There’s no “scientific consensus” that he’s wrong; there are just opinions.

dahart 5 days ago | parent

And? Are you imagining that these aren’t both true at the same time? If so, I’m happy to explain. Since nothing has been proven, there’s nothing “scientific”. And since there’s some disagreement, “consensus” has not been achieved yet. That is why your presumptive use of “scientific consensus” was not correct, and why the term “scientific consensus” is not the same thing as “most people think”. A split of 60/40 or 75/25 or even 90/10 counts as “most” but does not count as “consensus”. So maybe be careful about assuming what a term means; it seems like this thread was limited by several incorrect assumptions.