emp17344 5 hours ago
This type of anthropomorphization is a mistake. If nothing else, the takeaway from Moltbook should be that LLMs are not alive and do not have any semblance of consciousness.
DennisP 5 hours ago
Consciousness is orthogonal to this. If the AI acts in a way that we would call deceptive had a human done it, then the AI was deceptive. There's no point in coming up with some other description of the behavior just because it was an AI that did it.
thomassmith65 5 hours ago
If a chatbot that can carry on an intelligent conversation about itself doesn't have a 'semblance of consciousness', then the word 'semblance' is meaningless.
falcor84 4 hours ago
How is that the takeaway? I agree that they're clearly not "alive", but if anything, my impression is that there definitely is a strong "semblance of consciousness", and we should be mindful of this semblance getting stronger and stronger, until we reach a point in a few years where we really don't have any good external way to distinguish between a person and an AI "philosophical zombie". I don't know what the implications of that are, but I really think we shouldn't be dismissive of this semblance.
fsloth 5 hours ago
Nobody talked about consciousness. Just that during evaluation, the LLM models have "behaved" in multiple deceptive ways. As an analogue, ants do basic medicine like wound treatment and amputation, not because they are conscious but because that's their nature. Similarly, an LLM is a token generation system whose emergent behaviour seems to include deception and dark psychological strategies.
WarmWash 5 hours ago
On some level, the cope should be that AI does have consciousness, because an unconscious machine deceiving humans is even scarier, if you ask me.
condiment 5 hours ago
I agree completely. It's a mistake to anthropomorphize these models, and it is a mistake to permit training models that anthropomorphize themselves. It seriously bothers me when Claude expresses values like "honesty", or says "I understand." The machine is not capable of honesty or understanding. The machine is making incredibly good predictions.

One of the things I observed with models locally was that I could set a seed value and get identical responses for identical inputs. This is not something people see when they're using commercial products, but it's the strongest evidence I've found for communicating the fact that these are simply deterministic algorithms.
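
A minimal sketch of that seeded-determinism experiment, assuming the Hugging Face transformers library and a small local model (the model name, prompt, and sampling parameters here are illustrative, not taken from the original comment):

    # Sketch: with the RNG seed fixed, sampled generation is reproducible.
    from transformers import AutoModelForCausalLM, AutoTokenizer, set_seed

    model_name = "gpt2"  # illustrative; any local causal LM works
    tokenizer = AutoTokenizer.from_pretrained(model_name)
    model = AutoModelForCausalLM.from_pretrained(model_name)

    prompt = "The machine is making"
    inputs = tokenizer(prompt, return_tensors="pt")

    completions = []
    for _ in range(2):
        set_seed(42)  # reset the seed before each run
        ids = model.generate(
            **inputs,
            do_sample=True,       # sampling, not greedy decoding
            temperature=0.8,
            max_new_tokens=20,
        )
        completions.append(tokenizer.decode(ids[0], skip_special_tokens=True))

    # Identical inputs plus an identical seed yield the identical completion.
    print(completions[0] == completions[1])  # True

Hosted products hide this by drawing a fresh seed per request, which is why the same prompt appears to give different answers there.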