adastra22 | a day ago
Common sense is more than just causal reasoning. It is also the ability to draw on a large database of facts about the world and to know which ones apply to the current situation. But LLMs satisfy both your condition and mine: the attention network makes the causal connections you speak of, while the multi-layer perceptrons store and retrieve the facts selected by that mix of attention. It is not commonly described as such, but I think "common sense engine" is a far better description of what a GPT-based LLM is doing than mere next-word prediction.
whilenot-dev | a day ago | parent
> But LLMs achieve both your condition and mine.

Just to follow up: are you suggesting that Andrej Karpathy is wrong when he talks about the behavior of ChatGPT (GPT-4), or is GPT-5 just so much more advanced that it solved the "reversal curse" of GPT-4?