captainbland 16 hours ago
I hadn't until you mentioned it but now I have! I expect one day they'll train a language model on one and then we can just ask it, assuming they don't give it a special rule about never describing its experiences.
wizzwizz4 15 hours ago
The language model's output would be informed by its weights, not by its experiences as wetware. Substrate does not make a computation special: that's the whole point of the Chinese Room thought experiment. What mechanism are you imagining that would allow an LLM built of neurons to describe what it's like to be made of neurons, when an LLM built of GPUs cannot describe what it's like to be organised sand? The LLM in the GPU cluster is evaluated by performing the same calculations that could be performed by intricate clockwork, or very, very slowly by generations of monks using pencil and paper. Just as the monks have thoughts and feelings, it is conceivable (though perhaps impossible) that the brain tissue implementing an LLM has conscious experience; but if so, that experience would not be reflected in the LLM's output.
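To make the substrate-independence claim concrete, here is a minimal sketch (mine, not from the thread; the weights and numbers are made up): the same toy forward pass computed two ways, once with a linear-algebra library standing in for the GPU cluster, and once with explicit scalar arithmetic standing in for the monks with pencil and paper.

    # Toy sketch: the output is a function of weights and input alone,
    # not of the substrate doing the arithmetic.
    import math
    import numpy as np

    weights = np.array([[0.2, -0.5], [0.7, 0.1]])  # hypothetical 2x2 weight matrix
    x = np.array([1.0, 2.0])                       # hypothetical input

    # Substrate A: a linear-algebra library (stand-in for the GPU cluster).
    out_gpu_style = np.tanh(weights @ x)

    # Substrate B: explicit scalar arithmetic (stand-in for the monks).
    out_monk_style = [
        math.tanh(sum(w * xi for w, xi in zip(row, x)))
        for row in weights
    ]

    # Same weights, same input => same output, whatever did the computing.
    assert np.allclose(out_gpu_style, out_monk_style)

Any substrate that carries out the same arithmetic yields the same answer (up to floating-point tolerance), which is why the weights, not the hardware, determine what the model can say about itself.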