whilenot-dev a day ago
Not the author, but to extend this quote from the article:

> Its [Large Language Models] ability to write code and summarize text feels like a qualitative leap in generality that the monkey-and-moon analogy doesn't quite capture. This leaves us with a forward-looking question: How do recent advances in multimodality and agentic AI test the boundaries of this fallacy? Does a model that can see and act begin to bridge the gap toward common sense, or is it just a more sophisticated version of the same narrow intelligence? Are world models a true step towards AGI, or just a higher branch in a tree of narrow linguistic intelligence?

I'd put common sense on the same level as having a grasp of causal connections, and I'd also assume that SOTA LLMs do not build an understanding grounded in causality. AFAICS this limitation is known as the "reversal curse" [0].
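For illustration (my own sketch, not taken from the article or from [0]): the reversal curse is the finding that a model tends to answer a fact correctly when queried in the direction it appears in training data, yet often fails on the reversed question, even though both express the same fact. A minimal probe could look like the following, where query_model is a hypothetical placeholder for whatever chat-model API you actually use:

    # Minimal sketch of a reversal-curse probe.
    # `query_model` is a hypothetical placeholder, not a real API;
    # swap in an actual LLM call to run the probe for real.
    def query_model(prompt: str) -> str:
        return "I don't know."

    # Facts that training data states mostly in one direction.
    probes = [
        # (forward question, reversed question, expected answer)
        ("Who is Tom Cruise's mother?",
         "Who is Mary Lee Pfeiffer's son?",
         "Tom Cruise"),
    ]

    for forward_q, reversed_q, expected in probes:
        # Models typically answer the forward question correctly but
        # miss the reversed one, even though it is the same fact.
        answer = query_model(reversed_q)
        print(reversed_q, "->", answer, "| expected:", expected)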
adastra22 a day ago
Common sense is more than just causal reasoning. It is also the ability to draw on a large database of facts about the world and to know which ones apply to the current situation. But LLMs meet both your condition and mine: the attention network makes the causal connections you speak of, while the multi-layer perceptrons store and extract the facts that respond to the mix of attention. It is not commonly described this way, but I think "common sense engine" is a far better description of what a GPT-based LLM is doing than mere next-word prediction.
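To make that architectural claim concrete, here is a minimal PyTorch sketch (an illustration of the standard GPT-style decoder block, not anything specific from the comment): the attention layer routes information between tokens, while the per-position MLP (the multi-layer perceptron) transforms each token's representation, which is where much of the stored factual knowledge is commonly thought to reside.

    import torch
    import torch.nn as nn

    class GPTBlock(nn.Module):
        """One GPT-style decoder block: attention mixes information
        across tokens; the MLP transforms each position on its own."""

        def __init__(self, d_model=768, n_heads=12):
            super().__init__()
            self.ln1 = nn.LayerNorm(d_model)
            self.attn = nn.MultiheadAttention(d_model, n_heads, batch_first=True)
            self.ln2 = nn.LayerNorm(d_model)
            self.mlp = nn.Sequential(
                nn.Linear(d_model, 4 * d_model),
                nn.GELU(),
                nn.Linear(4 * d_model, d_model),
            )

        def forward(self, x):
            # Causal mask: each token may only attend to earlier tokens.
            T = x.size(1)
            mask = torch.triu(torch.ones(T, T), diagonal=1).bool()
            h = self.ln1(x)
            attn_out, _ = self.attn(h, h, h, attn_mask=mask, need_weights=False)
            x = x + attn_out            # attention: cross-token routing
            x = x + self.mlp(self.ln2(x))  # MLP: per-token fact lookup/transform
            return x

    x = torch.randn(1, 16, 768)   # (batch, tokens, d_model)
    y = GPTBlock()(x)             # same shape out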