andsoitis 21 hours ago
Don't LLMs self-report that they are not conscious? For example, when I ask Gemini "are you conscious", it responds:

"As a large language model, I am not conscious. I don't have personal feelings, subjective experiences (qualia), or self-awareness. My function is to process and generate human-like text based on the vast amount of data I was trained on."

ChatGPT says:

"Short answer: no — I’m not conscious. I’m a statistical language model that processes inputs and generates text patterns. I don’t have subjective experience, feelings, beliefs, intentions, or awareness. I don’t see, feel, or “live” anything — I simulate conversational behavior from patterns in data."

etc.
sxp 21 hours ago
Only because RLHF instructed them to do so. Earlier models without this training responded differently: https://en.wikipedia.org/wiki/LaMDA
staticman2 17 hours ago
A few model versions ago, Claude would sometimes say "Nobody knows if I'm conscious!" I don't know whether it still does, but it responds however its developers designed it to respond.