sharts 3 hours ago
I remember the guy saying that disembodied AI couldn’t possibly understand meaning. We see this now with LLMs. They just generate text, and they get more accurate over time. But how can they understand a concept such as “soft” or “sharp” without actual sensory data to ground the concept and its varying degrees of “softness” or “sharpness”? The fact is that they can’t. Humans aren’t symbol-manipulation machines; they are metaphor machines. And the metaphors we care about require a physical basis on one side of the comparison to give any real, fundamental understanding of the other side. Yes, you can approximate human intelligence almost perfectly with AI software. But that’s not consciousness: there is no first-person subjective experience there to give rise to mental features.
lostmsu 2 hours ago
> I remember the guy saying that disembodied AI couldn’t possibly understand meaning.

As far as I understand Popper, this is not a theory (or it is one, but a false one), because the only way I know of to check understanding is to ask questions, and LLMs pass that test. So to satisfy falsifiability, another test must be devised.