emp17344 5 hours ago

This type of anthropomorphization is a mistake. If nothing else, the takeaway from Moltbook should be that LLMs are not alive and do not have any semblance of consciousness.

DennisP 5 hours ago | parent | next [-]

Consciousness is orthogonal to this. If the AI acts in a way that we would call deceptive if a human did it, then the AI was deceptive. There's no point in coming up with some other description of the behavior just because it was an AI that did it.

emp17344 5 hours ago | parent [-]

Sure, but Moltbook demonstrates that AI models do not engage in truly coordinated behavior. They simply do not behave the way real humans do on social media sites - the actual behavior can be differentiated.

DennisP 2 hours ago | parent | next [-]

"Coordinated" and "deceptive" are orthogonal concepts as well. If AIs are acting in a way that's not coordinated, then of course, don't say they're coordinating.

AIs today can replicate some human behaviors, and not others. If we want to discuss which things they do and which they don't, then it'll be easiest if we use the common words for those behaviors even when we're talking about AI.

falcor84 4 hours ago | parent | prev [-]

But that's how ML works - as long as the output can be differentiated, we can utilize gradient descent to optimize the difference away. Eventually, the difference will be imperceptible.
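
To make that concrete, here's a toy sketch (numbers and names invented purely for illustration, nothing like a real training pipeline): as long as the gap is measurable and differentiable, gradient descent will chip away at it.

    # Toy illustration only: one parameter, one measurable "difference",
    # and plain gradient descent shrinking that difference step by step.
    human_signal = 0.73        # stand-in for the measurable "human-like" behaviour
    machine_param = 0.0        # the model's single tunable parameter
    learning_rate = 0.1

    for step in range(50):
        difference = machine_param - human_signal   # how distinguishable the outputs are
        loss = difference ** 2                      # squared-error loss on that gap
        gradient = 2 * difference                   # d(loss)/d(param), computed by hand here
        machine_param -= learning_rate * gradient   # the gradient descent update
        if step % 10 == 0:
            print(f"step {step:2d}  loss = {loss:.6f}")

    print(f"final parameter: {machine_param:.4f} (target was {human_signal})")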

And of course that brings me back to my favorite xkcd - https://xkcd.com/810/

emp17344 4 hours ago | parent [-]

Gradient descent is not a magic wand that makes computers behave like anything you want. The difference is still quite perceptible after several years and trillions of dollars in R&D, and there’s no reason to believe it’ll get much better.

falcor84 2 hours ago | parent [-]

Really, there's "no reason"? For me, watching ML gradually get better at every single benchmark thrown against it is quite a good reason. At this stage, the burden of proof is clearly on those who say it'll stop improving.

thomassmith65 5 hours ago | parent | prev | next [-]

If a chatbot that can carry on an intelligent conversation about itself doesn't have a 'semblance of consciousness' then the word 'semblance' is meaningless.

emp17344 5 hours ago | parent | next [-]

Would you say the same about ELIZA?

Moltbook demonstrates that AI models simply do not engage in behavior analogous to human behavior. Compare Moltbook to Reddit and the difference should be obvious.

shimman 5 hours ago | parent | prev [-]

Yes, when your priors are not being confirmed the best course of action is to denounce the very thing itself. Nothing wrong with that logic!

falcor84 4 hours ago | parent | prev | next [-]

How is that the takeaway? I agree that they're clearly not "alive", but if anything, my impression is that there definitely is a strong "semblance of consciousness", and we should be mindful of this semblance getting stronger and stronger, until we reach a point, maybe in a few years, where we really don't have any good external way to distinguish between a person and an AI "philosophical zombie".

I don't know what the implications of that are, but I really think we shouldn't be dismissive of this semblance.

fsloth 5 hours ago | parent | prev | next [-]

Nobody talked about consciousness. Just that during evaluation the LLMs have "behaved" in multiple deceptive ways.

As an analogy, ants perform basic medicine like wound treatment and amputation. Not because they are conscious, but because that's their nature.

Similarly, an LLM is a token generation system whose emergent behaviour seems to include deception and dark psychological strategies.

WarmWash 5 hours ago | parent | prev | next [-]

On some level the cope should be that AI does have consciousness, because an unconscious machine deceiving humans is even scarier if you ask me.

emp17344 5 hours ago | parent [-]

An unconscious machine + billions of dollars in marketing with the sole purpose of making people believe these things are alive.

condiment 5 hours ago | parent | prev [-]

I agree completely. It's a mistake to anthropomorphize these models, and it is a mistake to permit training models that anthropomorphize themselves. It seriously bothers me when Claude expresses values like "honesty" or says "I understand." The machine is not capable of honesty or understanding. The machine is making incredibly good predictions.

One of the things I observed with models locally was that I could set a seed value and get identical responses for identical inputs. This is not something that people see when they're using commercial products, but it's the strongest evidence I've found for communicating the fact that these are simply deterministic algorithms.
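
A minimal sketch of that experiment, assuming the Hugging Face transformers library and an arbitrary small model (both are just placeholders for whatever you run locally):

    # Sketch only: fixed seed + identical input => identical output, every time.
    # "gpt2" and the prompt are placeholders; any local causal LM behaves the same way.
    from transformers import AutoModelForCausalLM, AutoTokenizer, set_seed

    model_name = "gpt2"
    tokenizer = AutoTokenizer.from_pretrained(model_name)
    model = AutoModelForCausalLM.from_pretrained(model_name)

    inputs = tokenizer("The machine is", return_tensors="pt")

    for run in range(2):
        set_seed(42)                                # reset the seed before each run
        output = model.generate(
            **inputs,
            do_sample=True,                         # sampling, yet fully reproducible
            max_new_tokens=20,
            pad_token_id=tokenizer.eos_token_id,    # avoid the pad-token warning on gpt2
        )
        print(f"run {run}:", tokenizer.decode(output[0], skip_special_tokens=True))
    # Both runs print the same continuation: same seed, same input, same tokens.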