| ▲ | jrmg a day ago |
| LLMs are the Chinese Room. They would generate identical output for the same input text every time were it not for artificially introduced randomness (the sampling temperature). Of course, some would argue the Chinese Room is conscious. |
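[Editor's note: a minimal sketch of the mechanism being referenced, with names and values of my own choosing, not from the thread. Temperature-scaled softmax sampling is where the randomness enters: at temperature zero it collapses to argmax, and even at nonzero temperature a fixed seed makes the output fully deterministic.]

```python
import math
import random

def sample(logits, temperature=1.0, seed=None):
    """Sample a token index from logits using temperature-scaled softmax.

    temperature == 0 means greedy decoding (argmax), which is deterministic;
    a fixed seed makes nonzero-temperature sampling deterministic too.
    """
    if temperature == 0:
        return max(range(len(logits)), key=lambda i: logits[i])
    rng = random.Random(seed)
    scaled = [l / temperature for l in logits]
    m = max(scaled)                       # subtract max for numerical stability
    exps = [math.exp(s - m) for s in scaled]
    total = sum(exps)
    probs = [e / total for e in exps]
    return rng.choices(range(len(logits)), weights=probs)[0]

logits = [2.0, 1.0, 0.1]
assert sample(logits, temperature=0) == 0                       # greedy: same input, same token
assert sample(logits, 0.8, seed=42) == sample(logits, 0.8, seed=42)  # fixed seed: also repeatable
```

The point of the sketch: the "randomness" is an external knob, not something intrinsic to the model's forward pass.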
|
| ▲ | scarmig a day ago | parent | next [-] |
| If you somehow managed to perfectly simulate a human being, they would also act deterministically in response to identical initial conditions (modulo quantum effects, which are insignificant at the neural scale and also apply just as well to transistors). |
| |
| ▲ | elcritch a day ago | parent | next [-] | | It's not entirely infeasible that neurons could harness quantum effects: not across the neurons as a whole, but via some sort of microstructures or chemical processes [0]. It seems likely that birds harness quantum effects to measure magnetic fields [1]. [0]: https://www.sciencealert.com/quantum-entanglement-in-neurons... [1]: https://www.scientificamerican.com/article/how-migrating-bir... | |
| ▲ | andrei_says_ a day ago | parent | prev | next [-] | | Doesn’t everything act deterministically if all the forces are understood? Humans included. One can say the notion of free will is an unpacked bundle of near infinite forces emerging in and passing through us. | |
| ▲ | defrost a day ago | parent | prev [-] | | > in response to identical initial conditions  If "identical" means mathematically identical to infinite precision, then yes. Meanwhile, in the real world we live in, it's essentially physically impossible to prepare two separate systems to be identical to such a degree, AND it's an important result that some systems, even very simple ones, will have quite different outcomes without that impossibly precise match of initial conditions. See: Lorenz's Butterfly and Smale's Horseshoe Map. | | |
| ▲ | scarmig 8 hours ago | parent [-] | | Of course. But that's not relevant to the point I was responding to suggesting that LLMs may lack consciousness because they're deterministic. Chaos wasn't the argument (though that would be a much more interesting one, cf "edge of chaos" literature). |
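[Editor's note: the sensitive-dependence result defrost cites can be shown in a few lines. This is an illustrative sketch (function name and parameters are my own) using the logistic map at r = 4, a standard chaotic example: a 1e-12 difference in initial conditions grows to order one within a few dozen iterations, while truly identical starts stay identical forever.]

```python
def diverge(x0, delta=1e-12, steps=60, r=4.0):
    """Iterate the logistic map x -> r*x*(1-x) from two nearby starting
    points and return the largest gap observed between the trajectories."""
    a, b = x0, x0 + delta
    max_gap = 0.0
    for _ in range(steps):
        a = r * a * (1 - a)
        b = r * b * (1 - b)
        max_gap = max(max_gap, abs(a - b))
    return max_gap

print(diverge(0.2))             # a 1e-12 difference blows up to order 1
print(diverge(0.2, delta=0.0))  # identical initial conditions: gap stays exactly 0.0
```

The second call is the determinism half of the argument: with bitwise-identical inputs, the trajectories never separate at all.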
|
| ▲ | anorwell a day ago | parent | prev [-] |
| I am arguing (or rather, presenting without argument) that the Chinese Room may be conscious, hence calling it a fallacy above. Not that it _is_ conscious, to be clear, but that the Chinese Room argument has done nothing to show that it is not. Hofstadter makes the argument well in GEB and other places. |
| |