| ▲ | razorbeamz 6 hours ago |
| The point I'm trying to make is that all LLM output is based on likelihood of one word coming after the next word based on the prompt. That is literally all it's doing. It's not "thinking." It's not "solving." It's simply stringing words together in a way that appears most likely. ChatGPT cannot do math. It can only string together words and numbers in a way that can convince an outsider that it can do math. It's a parlor trick, like Clever Hans [1]. A very impressive parlor trick that is very convincing to people who are not familiar with what it's doing, but a parlor trick nonetheless. [1] https://en.wikipedia.org/wiki/Clever_Hans |
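(A toy sketch of the decoding loop this comment describes, using nothing beyond the standard library. A bigram table stands in for a real model, which conditions on the whole context with a transformer, but the shape of the loop is the same: repeatedly emit the most likely next word.)

    from collections import Counter, defaultdict

    # Count, in a tiny "training corpus", which word follows each word.
    corpus = "the cat sat on the mat and the cat ate".split()
    follows = defaultdict(Counter)
    for prev, nxt in zip(corpus, corpus[1:]):
        follows[prev][nxt] += 1

    def generate(word, steps=5):
        out = [word]
        for _ in range(steps):
            nexts = follows[out[-1]]
            if not nexts:
                break
            out.append(nexts.most_common(1)[0][0])  # emit the most likely next word
        return " ".join(out)

    print(generate("the"))  # e.g. "the cat sat on the cat"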
|
| ▲ | trick-or-treat 5 hours ago | parent | next [-] |
| > all LLM output is based on likelihood of one word coming after the next word based on the prompt. Right, but it has to reason about what that next word should be. It has to model the problem and then consider ways to approach it. |
| |
| ▲ | razorbeamz 5 hours ago | parent [-] |
| No, it does not reason about anything. LLM "reasoning" is just an illusion. When an LLM is "reasoning," it's just feeding its own output back into itself and giving it another go. |
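(A minimal sketch of the loop being described here; model_generate is a hypothetical stand-in for one decoding pass, not any real API. The "reasoning" step is literally the model's previous output concatenated into its next input.)

    def reason(model_generate, question, rounds=3):
        # "Reasoning" as described above: feed the model's own output
        # back into itself and give it another go.
        context = question
        for _ in range(rounds):
            step = model_generate(context)  # one generation pass (hypothetical callable)
            context += "\n" + step          # output becomes part of the next input
        return context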
| ▲ | fenomas 5 hours ago | parent | next [-] |
| This is like saying chess engines don't actually "play" chess, even though they trounce grandmasters. It's a meaningless distinction, about words (think, reason, ...) that have no firm definitions. |
| ▲ | trick-or-treat 4 hours ago | parent | next [-] |
| This exactly. The proof is in the pudding. If AI pudding is as good as (or better than) human pudding, and you continue to complain about it anyway... you're just being biased and unreasonable. And by the way, I don't think it's surprising that so many people are being unreasonable on this issue; there is a lot at stake, and its implications are transformative. |
| ▲ | razorbeamz 4 hours ago | parent | prev [-] |
| Chess engines are not a comparable thing. Chess is a solved game. There is always a mathematically perfect move. |
| ▲ | trick-or-treat 3 hours ago | parent | next [-] |
| > Chess is a solved game. There is always a mathematically perfect move. This is a good example of being confidently misinformed. The best move is always a result of calculation. And the calculation can always go deeper or run on a stronger engine. |
| ▲ | Scarblac 3 hours ago | parent | prev | next [-] |
| We know that chess can be solved, in theory. It absolutely isn't, and probably never will be in practice: the necessary time and storage space don't exist. (Shannon's classic estimate puts the game tree at around 10^120 possible games.) |
| ▲ | sincerely 3 hours ago | parent | prev [-] |
| Chess is absolutely not a solved game, outside of very limited situations like endgames. Just because a best move exists does not mean we (or even an engine) know what it is. |
|
| ▲ | Scarblac 3 hours ago | parent | prev [-] |
| Is that so different from brains? Even if it is, this sounds like "this submarine doesn't actually swim" reasoning. |
|
| ▲ | brenschluss 5 hours ago | parent | prev [-] |
| Sigh; this argument is the new Chinese Room: easily described, utterly wrong. https://www.youtube.com/watch?v=YEUclZdj_Sc |
| |
| ▲ | gpderetta 17 minutes ago | parent | next [-] |
| After dismissing it for a long time, I have come around to the philosophical zombie argument. I do not believe that LLMs are conscious, but I also no longer believe that consciousness is a prerequisite for intelligence. I think at this point it is hard to deny that LLMs possess some form of intelligence (although not necessarily a human-like one). I think "P-zombie" is a fitting description. |
| ▲ | razorbeamz 5 hours ago | parent | prev [-] |
| Next-token prediction cannot do calculations. That is fundamental. It can produce outputs that resemble calculations. It can prompt an agent to input some numbers into a separate program that will do calculations for it and then return them as a prompt. Neither of these is a calculation. |
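(A minimal sketch of the pattern this comment describes. Every name here is hypothetical, and a real harness would use a provider's structured tool-calling API: the model emits a marker like CALC(137*24), ordinary code evaluates it, and the result is fed back in as part of the next prompt.)

    import re

    def run_with_calculator(model_generate, prompt):
        # model_generate is a hypothetical stand-in for one LLM call.
        text = model_generate(prompt)
        m = re.search(r"CALC\(([0-9+\-*/(). ]+)\)", text)
        if m:
            # The "separate program" that actually does the arithmetic
            # (eval on a digits-and-operators-only string, for illustration only).
            result = eval(m.group(1))
            # Return the answer to the model as part of a new prompt.
            text = model_generate(prompt + text + "\nRESULT: " + str(result) + "\n")
        return text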
| ▲ | gf000 3 hours ago | parent | next [-] |
| So you don't think 50T-parameter neural networks can encode the logic for adding two n-bit integers, for reasonably sized n? That would be pretty sad. |
| ▲ | razorbeamz 3 hours ago | parent [-] |
| They do not. The fundamental technology behind LLMs does not allow that to be the case. You are hoping that an LLM can do something that it cannot do. |
| ▲ | gf000 2 hours ago | parent [-] |
| https://arxiv.org/html/2502.16763v2 You are wrong, especially when we are talking about models with 50T parameters. Can they do arbitrary computations for arbitrarily long numbers? Nope. But that's not remotely the same statement, and they can trivially call out to tools to do that in those cases. |
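(On the narrower point that a network can encode exact integer addition at all: a hand-built illustration, not the construction from the linked paper. A two-hidden-unit ReLU block computes a full adder exactly, and chaining n of them adds two n-bit integers with fixed weights.)

    def relu(x):
        return max(0, x)

    def full_adder(a, b, c):
        # Two ReLU hidden units over the linear sum a+b+c:
        h1 = relu(a + b + c - 1)
        h2 = relu(a + b + c - 2)
        carry = h1 - h2                    # 1 iff at least two inputs are 1
        s = (a + b + c) - 2*h1 + 2*h2      # (a+b+c) mod 2, expressed via ReLUs
        return s, carry

    def add_n_bit(x, y, n):
        # Chain n full-adder blocks, least significant bit first.
        out, carry = 0, 0
        for i in range(n):
            s, carry = full_adder((x >> i) & 1, (y >> i) & 1, carry)
            out |= s << i
        return out | (carry << n)

    assert add_n_bit(1234, 4321, 16) == 5555   # exact addition from fixed weights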
|
| ▲ | parasubvert 5 hours ago | parent | prev [-] |
| Humans can't do calculations either, by your definition. Only computers can. |
| ▲ | datsci_est_2015 2 hours ago | parent [-] |
| Third things can exist. In other words, you're implying a false dichotomy between "human computation" and "computer computation," and implying that LLMs must be one or the other. A pithy gotcha comment, no doubt. Edit: the implication comes from demanding that the OP's definition be rigorous enough to cover all models of "computation"; since it fails to do so, the conclusion is that LLMs must be more like humans than like computers. |