| ▲ | Terr_ 3 days ago |
| Someone asserts, almost religiously, that LLMs do and/or can "think." When asked how to falsify their assertion, perhaps by explaining what exactly is "thinking" in the human brain that can and/or will be possible to emulate... |
|
| ▲ | umanwizard 3 days ago | parent | next [-] |
| Err, no, that’s not what’s happening. Nobody, at least in this thread (and most others like it I’ve seen), is confidently claiming LLMs can think. There are people confidently claiming they can’t and then other people expressing skepticism at their confidence and/or trying to get them to nail down what they mean. |
| |
| ▲ | jayveeone 3 days ago | parent [-] |
| This entire debate over the years exists because so many people confidently assert that AI can think, or that AI will soon be our God, or our ruler, etc. |
|
|
| ▲ | gfdvgfffv 3 days ago | parent | prev | next [-] |
| One mostly sees people aggressively claiming they can’t, ever. On the other side people seem to simply allow that they might, or might eventually. |
|
| ▲ | superkuh 3 days ago | parent | prev | next [-] |
| Or they just point to the Turing test, which was the de facto standard test for something so nebulous. And behold: LLMs can pass the Turing test. So they think. Can you come up with something better (than the Turing test)? |
| |
| ▲ | bigfishrunning 3 days ago | parent | next [-] |
| But the Turing test (which I concede LLMs do pass) doesn't test whether a system is thinking; it tests whether the system can convince an unbiased observer that it is thinking. I cannot come up with a better "is this thing thinking" test, but that doesn't mean that such a test can't exist; I'm sure there are much smarter people than me trying to solve this problem. | |
|
|
| ▲ | CamperBob2 3 days ago | parent | prev [-] |
| > When asked how to falsify their assertion, perhaps by explaining what exactly is "thinking" in the human brain that can and/or will be possible to emulate... |
| ... someone else points out that the same models that can't "think" are somehow turning in gold-level performance at international math and programming competitions, making Fields Medalists sit up and take notice, winning art competitions, composing music indistinguishable from human output, and making entire subreddits fail the Turing test. |
| |
| ▲ | Terr_ 3 days ago | parent | next [-] |
| A couple decades of chess programs nods knowingly: "First time?" | | |
| ▲ | CamperBob2 3 days ago | parent [-] |
| > A couple decades of chess programs nods knowingly: "First time?" |
| Uh huh. Good luck getting Stockfish to do your math homework while Leela works on your next waifu. LLMs play chess poorly. Chess engines do nothing else at all. That's kind of a big difference, wouldn't you say? | | |
| ▲ | ben_w 3 days ago | parent | next [-] |
| > That's kind of a big difference, wouldn't you say? |
| To their utility, yes. But I'm not sure it matters for the question of "thinking": even if, for the debaters, "thinking" requires consciousness/qualia (and that varies), there's nothing more than guesses as to where that emerges from. |
| ▲ | gowld 3 days ago | parent | prev [-] |
| Terr_ was agreeing with you and highlighting how old the debate is. | | |
| ▲ | Terr_ 3 days ago | parent [-] |
| Highlighting, yes; agreeing, no. For my original reply, the main subtext would be: "Your complaint is ridiculously biased." For the later reply about chess, perhaps: "You're asserting that tricking, amazing, or beating a human is a reliable sign of human-like intelligence. We already know that is untrue from decades of past experience." | | |
| ▲ | CamperBob2 3 days ago | parent [-] |
| > You're asserting that tricking, amazing, or beating a human is a reliable sign of human-like intelligence. |
| I don't know who's asserting that (other than Alan Turing, I guess); certainly not me. Humans are, if anything, easier to fool than our current crude AI models are. Heck, ELIZA was enough to fool non-specialist humans. In any case, nobody was "tricked" at the IMO. What happened there required legitimate reasoning abilities. The burden of proof falls decisively on those who assert otherwise. |
|
|
|
| |
| ▲ | nutjob2 3 days ago | parent | prev | next [-] |
| Computers can perform math and numerous other tasks billions of times faster than humans; what's your point? This is exactly the problem: claims about AI are unfalsifiable, thus your various non sequiturs about AI 'thinking'. |
| ▲ | nh23423fefe 3 days ago | parent | prev [-] |
| god of the gaps | | |
| ▲ | CamperBob2 3 days ago | parent [-] |
| Exactly. As soon as a model does something it "wasn't supposed to be able to do," two gaps open up on either side. |
|
|