▲ Chance-Device 12 hours ago
And what kind of evidence would convince you? What experiment could ever bridge this gap? You're relying entirely on the similarity between yourself and other humans. That similarity doesn't extend well to anything else, not even to animals, though it extends further to them than to machines. By framing it this way, haven't you baked in the conclusion that nothing else can be conscious on an a priori basis?
▲ staticassertion 2 minutes ago | parent | next [-]
There are fields that focus on these areas, and there are numerous ideas about what the criteria would be. One common view is that recurrent processing is likely a foundational requirement for consciousness, and agents don't have it currently. In terms of evidence, I'd want to establish specific functional criteria that seem related to consciousness, and then try to show that those criteria hold for agents. If we can do that, then they're conscious. My layman understanding is that they don't really come close to meeting some of the fairly fundamental requirements.
▲ suddenlybananas 11 hours ago | parent | prev [-]
I'm not sure what evidence would convince me, but I don't think the way LLMs act is convincing enough. The kinds of errors they make, and the fact that they operate in very clearly discrete chunks, make it hard for me to attribute subjective experience to them.