sharts 3 hours ago

I remember the guy saying that disembodied AI couldn’t possibly understand meaning.

We see this now with LLMs. They just generate text, and they get more accurate over time. But how can they understand a concept such as “soft” or “sharp” without actual sensory data with which to grasp the concept and the varying degrees of “softness” or “sharpness”?

The fact is that they can’t.

Humans aren’t symbol manipulation machines. They are metaphor machines. And the metaphors we care about require a physical basis on one side of the comparison before there can be any real, fundamental understanding of the other side.

Yes, you can approach human intelligence almost perfectly with AI software. But that’s not consciousness. There is no first person subjective experience there to give rise to mental features.

lostmsu 2 hours ago | parent [-]

> I remember the guy saying that disembodied AI couldn’t possibly understand meaning.

This is not a theory (or is one, but false) according to Popper, as far as I understand, because the only way to check understanding that I know of is to ask questions, and LLMs pass that test. So to satisfy falsifiability, another test must be devised.

pegasus 17 minutes ago | parent [-]

I think the claim would be that an LLM would only ever pass a strict subset of the questions testing a particular understanding. As we gather more and more text to feed these models, finding those questions will necessarily require more and more out-of-the-box thinking... or a (un)lucky draw. Giveaways will always be lurking just beyond the inference horizon, ready to yet again deflate our high hopes of having finally created a machine which actually understands our everyday world.

I find this thesis very plausible. LLMs inhabit the world of language, not our human everyday world, so their understanding of it will always be second-hand: an approximation of our own, itself imperfect, understanding of that world.