▲ | sourcepluck 4 days ago
I guess you don't follow TCEC, or computer chess generally[0]. Chess engines have been _playing chess_ at superhuman levels using neural networks for years now; it was a revolution in the space. AlphaZero, Lc0, Stockfish NNUE. I don't recall yards of commentary arguing that they were reasoning.

Look, you can put as many underscores as you like: the question of whether these machines are really reasoning or merely emulating reason is not a solved problem. We don't know what reasoning is! We don't know if we ourselves are really reasoning, because we have major unresolved questions regarding the mind and consciousness[1].

These may not be intractable problems, either; there's reason for hope. In particular, studying brains with more precision is an obviously exciting avenue. More computational experimentation, including the recent explosion in LLM research, is also welcome. Still, reflexively believing in the computational theory of the mind[2] without engaging with the actual difficulty of those questions, though commonplace, is not reasonable.

[0] Jozarov on YT has great commentary of top engine games, worth checking out.
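To make "playing chess with neural networks" concrete, here's a toy sketch of the pattern these engines share: classical alpha-beta search, with leaf positions scored by a learned evaluation instead of a handcrafted formula. It uses the python-chess library, and nn_eval is a stand-in for a trained network (plain material counting here), so treat it as an illustration of the architecture, not any engine's actual code.

    import math
    import chess  # pip install python-chess

    PIECE_VALUES = {chess.PAWN: 1, chess.KNIGHT: 3, chess.BISHOP: 3,
                    chess.ROOK: 5, chess.QUEEN: 9, chess.KING: 0}

    def nn_eval(board: chess.Board) -> float:
        """Stand-in for the neural evaluation (NNUE runs a small net here);
        returns a score from the side to move's point of view."""
        score = sum(PIECE_VALUES[p.piece_type] * (1 if p.color == chess.WHITE else -1)
                    for p in board.piece_map().values())
        return score if board.turn == chess.WHITE else -score

    def negamax(board: chess.Board, depth: int,
                alpha: float = -math.inf, beta: float = math.inf) -> float:
        """Alpha-beta search; the network's only job is scoring leaves."""
        if depth == 0 or board.is_game_over():
            return nn_eval(board)
        best = -math.inf
        for move in list(board.legal_moves):
            board.push(move)
            score = -negamax(board, depth - 1, -beta, -alpha)
            board.pop()
            best = max(best, score)
            alpha = max(alpha, best)
            if alpha >= beta:  # prune lines the opponent won't allow
                break
        return best

    print(negamax(chess.Board(), depth=3))

The point of the NNUE revolution was exactly this division of labour: the search stayed classical, and only the handcrafted evaluation function was swapped for a learned one.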
▲ | atemerev 3 days ago | parent [-]
I am not implying that LLMs are conscious or anything like that. Just that they can reason, i.e., draw logical conclusions from observations (or, in their case, textual inputs) and make generalizations. This is a much weaker requirement. Chess engines can reason about chess (they can even explain their reasoning). LLMs can reason about many other things, with varying efficiency.

What everyone is currently trying to build is something like AlphaZero (adversarial self-improvement for superhuman performance) with the state space of LLMs (general enough to be useful for most tasks). When we have this, we'll have AGI.
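Schematically, that loop would look something like the sketch below. Everything in it is a hypothetical placeholder (model, judge, the method names); the point is the shape of the loop, and the open question is what plays the role of chess's built-in win/loss signal for open-ended tasks.

    def self_improve(model, tasks, judge, rounds=10, samples=8):
        """Generic 'AlphaZero over an LLM-sized state space' loop (sketch).

        model: anything with .generate(task) and .fit(pairs) -- hypothetical.
        judge: scores a (task, attempt) pair. Chess gets this for free
               (the game result); most useful tasks don't, which is the
               hard part of the whole program.
        """
        for _ in range(rounds):
            training_set = []
            for task in tasks:
                # Self-play step: sample several candidate solutions.
                attempts = [model.generate(task) for _ in range(samples)]
                # Evaluation step: keep the strongest attempt.
                best = max(attempts, key=lambda a: judge(task, a))
                training_set.append((task, best))
            # Distillation step: pull the model toward its own best
            # outputs, so the next round samples from a stronger policy.
            model.fit(training_set)
        return model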