atemerev 3 days ago
I am not implying that LLMs are conscious or anything like that. Just that they can reason, i.e. draw logical conclusions from observations (or, in their case, textual inputs) and make generalizations. This is a much weaker requirement. Chess engines can reason about chess (they can even explain their reasoning). LLMs can reason about many other things, with varying effectiveness.

What everyone is currently trying to build is something like AlphaZero (adversarial self-improvement for superhuman performance) over the state space of LLMs (general enough to be useful for most tasks). When we have this, we'll have AGI.
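To make the shape of that loop concrete, here is a minimal, purely illustrative Python sketch of AlphaZero-style self-improvement wrapped around a language model. Every function in it (propose_task, generate, verify, finetune) is a hypothetical stub standing in for a real component; nothing here comes from an actual library or from the comment itself.

    import random

    def propose_task(model):
        # Stub: in AlphaZero the "task" is a self-play game; here it
        # would be a self-generated problem with a checkable answer.
        return random.randint(0, 9)

    def generate(model, task, n=4):
        # Stub: sample n candidate solutions from the model.
        return [random.randint(0, 9) for _ in range(n)]

    def verify(task, candidate):
        # Stub: a verifier supplies the adversarial signal that
        # self-play supplies in AlphaZero.
        return 1.0 if candidate == task else 0.0

    def finetune(model, examples):
        # Stub: update the model on verified trajectories.
        return model

    model = object()  # placeholder for an actual LLM
    for step in range(3):
        task = propose_task(model)
        candidates = generate(model, task)
        winners = [c for c in candidates if verify(task, c) > 0.5]
        model = finetune(model, [(task, w) for w in winners])

The point of the sketch is the structure, not the stubs: the model generates its own training signal through a verifier, which is what "adversarial self-improvement" means here.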