PaulDavisThe1st | 7 hours ago
The problem with all this is that we don't actually know what human cognition is doing either. We know what our experience is - thinking about concepts and then translating that into language - but we really don't know with much confidence what is actually going on. I lean strongly toward the idea that humans are doing something quite different from LLMs, particularly when reasoning. But I want to leave the door open to the idea that we've not understood human cognition, mostly because our primary evidence there comes from our own subjective experience, which may (or may not) provide a reliable guide to what is actually happening.
viccis | 7 hours ago
> The problem with all this is that we don't actually know what human cognition is doing either.

We do know what it's not doing, and that is operating only by reproducing linguistic patterns. There's no more reason to think LLMs approximate our thought (thought being something they are incapable of) than to think a Naive Bayes spam filter approximates our thought.
| ||||||||||||||||||||||||||||||||||||||||||||