whilenot-dev a day ago

Not the author, but to extend this quote from the article:

> Its [Large Language Models] ability to write code and summarize text feels like a qualitative leap in generality that the monkey-and-moon analogy doesn't quite capture. This leaves us with a forward-looking question: How do recent advances in multimodality and agentic AI test the boundaries of this fallacy? Does a model that can see and act begin to bridge the gap toward common sense, or is it just a more sophisticated version of the same narrow intelligence? Are world models a true step towards AGI or just a higher branch in a tree of narrow linguistic intelligence?

I'd put the expression "common sense" on the same level as having causal connections, and would also assume that SOTA LLMs do not build an understanding based on causality. AFAICS this limitation is known as the "reversal curse"[0].

[0]: https://youtu.be/zjkBMFhNj_g?t=750
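
For anyone who wants to see what this looks like in practice, here is a minimal, purely illustrative sketch of the kind of probe used to demonstrate the reversal curse (the prompt pair is the often-cited example from Berglund et al.'s reversal-curse paper; gpt2 is used only because it is small enough to run anywhere, while the reported failures are on much larger models):

    # Hedged sketch: probing for the "reversal curse" with Hugging Face's
    # transformers pipeline. gpt2 is chosen only for runnability; the
    # effect was reported on much larger models.
    from transformers import pipeline

    generator = pipeline("text-generation", model="gpt2")

    forward = "Tom Cruise's mother is named"
    reverse = "Mary Lee Pfeiffer's son is named"

    for prompt in (forward, reverse):
        out = generator(prompt, max_new_tokens=8, do_sample=False)
        print(prompt, "->", out[0]["generated_text"][len(prompt):])

    # A model with a truly bidirectional fact store should handle both
    # directions; the reversal-curse finding is that the reversed prompt
    # fails far more often than the forward one.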

adastra22 a day ago | parent [-]

Common sense is more than just causal reasoning. It is also an ability to draw upon a large database of facts about the world and to know which ones apply to the current situation.

But LLMs achieve both your condition and mine. The attention network makes the causal connections you speak of, while the multi-layer perceptrons store and extract the facts that respond to the mix of attention.

It is not commonly described as such, but I think “common sense engine” is a far better description of what a GPT-based LLM is doing than mere next-word prediction.
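
To make the division of labor concrete, here is a minimal pre-norm transformer block in PyTorch. It's a sketch under standard assumptions, not any particular model's implementation: attention mixes information between token positions, while the per-position MLP applies learned transformations that work like Geva et al. 2021 interprets as key-value fact storage.

    # Minimal sketch of one pre-norm transformer block (PyTorch).
    # Illustrative only; not the implementation of any specific GPT.
    import torch
    import torch.nn as nn

    class Block(nn.Module):
        def __init__(self, d_model=768, n_heads=12):
            super().__init__()
            self.ln1 = nn.LayerNorm(d_model)
            self.attn = nn.MultiheadAttention(d_model, n_heads, batch_first=True)
            self.ln2 = nn.LayerNorm(d_model)
            # The "multi-layer perceptron": applied to each position
            # independently, often interpreted as key-value fact storage.
            self.mlp = nn.Sequential(
                nn.Linear(d_model, 4 * d_model),
                nn.GELU(),
                nn.Linear(4 * d_model, d_model),
            )

        def forward(self, x):
            # Attention: tokens exchange information across positions;
            # this is where connections between parts of the context form.
            h = self.ln1(x)
            a, _ = self.attn(h, h, h, need_weights=False)
            x = x + a
            # MLP: per-token transformation, no cross-position mixing.
            x = x + self.mlp(self.ln2(x))
            return x

    x = torch.randn(1, 16, 768)   # (batch, sequence, d_model)
    print(Block()(x).shape)       # torch.Size([1, 16, 768])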

whilenot-dev a day ago | parent [-]

> But LLMs achieve both your condition and mine.

Just to follow up: are you suggesting that Andrej Karpathy is wrong when he talks about the behavior of ChatGPT (GPT-4), or is GPT-5 simply so much more advanced that it solved the "reversal curse" of GPT-4?

adastra22 21 hours ago | parent [-]

Well, what does Andrej Karpathy say? Kinda hard to respond without knowing that :)

What I said was true of GPT-2, and much more clearly the case with GPT-3. Unfortunately us plebs don’t have as good insight into later models.

whilenot-dev 20 hours ago | parent [-]

Just listen to 45 seconds of the video I linked above, if you're interested.

adastra22 20 hours ago | parent [-]

That is how human memory works too, though. It is well documented in the psychological literature that human memory is not the bidirectional mapping or graph you might expect from computer analogies. Associative memory in the mind is unidirectional and content-addressable; most people can recite the alphabet forward effortlessly but struggle to recite it backward. This results in odd failures very similar to this "reversal curse."
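
A toy sketch of why content-addressable storage is inherently one-way (plain Python, purely illustrative):

    # Toy illustration: a content-addressable store answers queries in the
    # direction the association was written, but the inverse mapping does
    # not exist unless you build a second index for it.
    memory = {"Tom Cruise's mother": "Mary Lee Pfeiffer"}

    # Forward query: cheap, direct lookup.
    print(memory["Tom Cruise's mother"])

    # Reverse query ("whose mother is Mary Lee Pfeiffer?"): not addressable;
    # the only way to answer is to scan every stored association.
    print([k for k, v in memory.items() if v == "Mary Lee Pfeiffer"])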

We shouldn't strive for our AI to be bug-for-bug compatible with human thinking. But I fail to see how AI having similar limitations to human brains serves as evidence that they DON'T serve similar functions.