lapcat 4 days ago
> LLMs might be decent models of Level 1

I don't think so. Fast speakers and hypnotized people are still clearly conscious and "at home" inside, vastly more "human" than any LLM. Deliberation and evaluation imply thinking before you speak, but they do not imply that you can't otherwise think while you speak.
adamzwasserman 4 days ago | parent
The body of knowledge on Ericksonian hypnotherapy is pretty clear that the effect of language on Level 1 is orthogonal to, and sometimes even opposed to, conscious processes.

I became interested after being medically hypnotized for kidney stone pain. As the hypnotist spoke, I was consciously thinking: "this is dumb, it will never work." And yet it did.

That's exactly your point: I was fully conscious and "at home" the whole time, yet something was processing and acting on the language independently. The question is whether that something shares any computational properties with LLMs, not whether the whole system does.