adastra22 a day ago

Common sense is more than just causal reasoning. It is also the ability to draw upon a large database of facts about the world and to know which ones apply to the current situation.

But LLMs achieve both your condition and mine. The attention network makes the causal connections you speak of, while the multi-layer perceptrons store and retrieve facts in response to the mix the attention heads produce.
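
Concretely, here is a toy single transformer block (PyTorch; the sizes and the single-block structure are illustrative assumptions, not any actual GPT's configuration). Attention is the only step where information moves between token positions; the MLP runs on each position independently, and its weights behave like a key-value store of facts:

```python
# Minimal decoder-block sketch (PyTorch). Hyperparameters are
# invented for illustration.
import torch
import torch.nn as nn

class Block(nn.Module):
    def __init__(self, d_model=64, n_heads=4):
        super().__init__()
        # Attention: the only place information moves BETWEEN tokens,
        # i.e. where relational connections get made.
        self.attn = nn.MultiheadAttention(d_model, n_heads, batch_first=True)
        self.ln1 = nn.LayerNorm(d_model)
        # MLP: applied to each token independently; its weights act
        # like a key-value store of facts retrieved per token.
        self.mlp = nn.Sequential(
            nn.Linear(d_model, 4 * d_model),  # "keys": which patterns fire
            nn.GELU(),
            nn.Linear(4 * d_model, d_model),  # "values": what gets written back
        )
        self.ln2 = nn.LayerNorm(d_model)

    def forward(self, x):
        # Causal mask: each token may attend only to earlier positions.
        T = x.size(1)
        mask = torch.triu(torch.ones(T, T, dtype=torch.bool), diagonal=1)
        h = self.ln1(x)
        a, _ = self.attn(h, h, h, attn_mask=mask)
        x = x + a                      # mix across positions
        x = x + self.mlp(self.ln2(x))  # per-token fact lookup
        return x

x = torch.randn(1, 8, 64)  # (batch, tokens, d_model)
print(Block()(x).shape)    # torch.Size([1, 8, 64])
```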

It is not commonly described as such, but I think “common sense engine” is a far better description of what a GPT-based LLM is doing than mere next-word prediction.

whilenot-dev a day ago | parent [-]

> But LLMs achieve both your condition and mine.

Just to follow up: are you suggesting that Andrej Karpathy is wrong when he talks about the behavior of ChatGPT (GPT-4), or is GPT-5 just so far beyond the state of the art that it solved GPT-4's "reversal curse"?
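
(For anyone without the video handy: the "reversal curse" is the observation that a model trained on "A is B" often fails to answer "B is A". The toy counting-model sketch below, with an invented training sentence, shows how purely directional next-token statistics produce exactly this asymmetry; real LLMs are vastly more capable than count tables, but the flavor is the same.)

```python
# Toy sketch of the "reversal curse" in raw next-token statistics.
# The training sentence and names are made up.
from collections import Counter, defaultdict

training_text = "the mother of tom is mary".split()

# Count trigrams: (two-token context) -> next token.
table = defaultdict(Counter)
for a, b, c in zip(training_text, training_text[1:], training_text[2:]):
    table[(a, b)][c] += 1

def complete(prompt):
    ctx = tuple(prompt.split()[-2:])
    dist = table.get(ctx)
    return dist.most_common(1)[0][0] if dist else "<no prediction>"

# Forward, exactly as stated in training: works.
print(complete("the mother of tom is"))  # -> mary

# Reversed, logically equivalent, but never stated: fails.
print(complete("the son of mary is"))    # -> <no prediction>
```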

adastra22 21 hours ago | parent [-]

Well, what does Andrej Karpathy say? Kinda hard to respond without knowing that :)

What I said was true of GPT-2, and much more clearly the case with GPT-3. Unfortunately, us plebs don't have as much insight into the later models.

whilenot-dev 20 hours ago | parent [-]

Just listen to 45 seconds of the video I linked above if you're interested.

adastra22 20 hours ago | parent [-]

That is how human memory works too, though. It is well documented in the psychological literature that human memory is not the bidirectional mapping or graph you might expect from computer analogies. Associative memory in the mind is unidirectional and content-addressable, which results in odd failures very similar to this "reversal curse."
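
A quick sketch of what "unidirectional and content-addressable" means, with a plain dict standing in for associative memory (the names are invented):

```python
# One-way, content-addressed association: cue -> response, like a
# dict lookup. Given the cue, the response comes back in one step;
# going backwards requires a search the structure never built.
memory = {
    ("tom", "mother"): "mary",   # stored direction only
}

# Forward recall: content-addressed, instant.
print(memory[("tom", "mother")])          # -> mary

# Reverse recall: the mapping simply isn't there...
print(memory.get(("mary", "son"), None))  # -> None

# ...unless you exhaustively scan and invert, which is a different,
# slower operation -- much like reciting the alphabet backwards.
inverse = {v: k for k, v in memory.items()}
print(inverse["mary"])                    # -> ('tom', 'mother')
```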

We shouldn't strive for our AI to be bug-for-bug compatible with human thinking. But I fail to see how AI having similar limitations to human brains serves as evidence that they DON'T serve similar functions.