quacker | 3 days ago
I agree with you about the current state of LLMs, but I also feel like we humans have preconceived notions of "thought" and "reasoning", and are a bit prideful of them. We see the LLM do sort of well at a whole bunch of tasks, but it makes silly mistakes that seem obvious to us, and we say, "Aha! So it can't reason after all."

Say LLMs get a bit better, to the point where they can beat chess grandmasters 55% of the time. That's quite good; low-level chess players rarely ever beat grandmasters. But the LLM sometimes spits out illegal moves and sometimes blunders nonsensically, so again we say, "Aha! So it can't reason after all." Yet what would it matter whether it can reason? Beating grandmasters 55% of the time would make it among the best chess players in the world.

For now, LLMs just aren't that good. They are too error-prone, inconsistent, and nonsensical. But they are also sort of weirdly capable at lots of things, in strange and inconsistent ways, and assuming they continue to improve, I think they will tend to defy our typical notions of human intelligence.