zahlman 3 days ago
> Only if you redefine "reasoning". This is something that the generative AI industry has succeeded in convincing many people of, but that doesn't mean everyone has to accede to that change.

I agree. However, LLMs can clearly produce a reasonable facsimile of many things that we previously believed required reasoning to do acceptably.
quesera 3 days ago | parent
Right -- we know that LLMs cannot think, feel, or understand. Therefore, whenever they produce output that looks like the result of those things, either we are being deceived by a reasonable facsimile, or we misapprehended the necessity of those things in the first place.

But do we understand the human brain as well as we understand LLMs? Obviously something is different, but is it just a matter of degree? LLMs have greater memory than humans, and a lesser ability to correlate it.

Correlation is powerful magic. It's pattern matching, though, and I don't see a fundamental reason why LLMs won't get better at it. Maybe never as good as (smart) humans are, but with their superior memory, maybe that will often be adequate.