Lerc 3 days ago

If you pick any well-performing AI architecture, what would lead you to believe that it is not capable of having a rich internal cognitive representation?

The Transformer, well... transforms: at each layer it produces a different representation of the context. What is this but an internal representation? One cannot assess whether that representation is rich or cognitive without some agreement on what those terms mean.

LLMs can seemingly convert a variety of languages into an internal representation that encompasses the gist of any of them. This would at least provide a decent argument that the internal representation is 'rich'.

As for cognitive? What assessment would you have in mind that would clearly disqualify something from being a cognitive entity?

I think most people working in this field who are confident feel that they can extend what they know now to make something that looks like a duck, walks like a duck, and quacks like a duck. If that is achieved, on what basis does anyone have to say "But it's not really a duck"?

I'm ok with people saying AI will never be able to perform that well because it doesn't have X, as long as they accept that if it does, one day, perform that well, then either X is present or X is not relevant.

arolihas 2 days ago | parent | next [-]

If you think we're only our observable behaviors, or that those are the only relevant thing to you, then I don't think it's worth getting into this argument. Consider this excerpt from https://scottaaronson.blog/?p=7094#comment-1947377

> Most animals are goal-directed, intentional, sensory-motor agents who grow interior representations of their environments during their lifetime which enables them to successfully navigate their environments. They are responsive to reasons their environments affords for action, because they can reason from their desires and beliefs towards actions.

> In addition, animals like people, have complex representational abilities where we can reify the sensory-motor “concepts” which we develop as “abstract concepts” and give them symbolic representations which can then be communicated. We communicate because we have the capacity to form such representations, translate them symbolically, and use those symbols “on the right occasions” when we have the relevant mental states.

> (Discrete mathematicians seem to have imparted a magical property to these symbols that *in them* is everything… no, when I use words it’s to represent my interior states… the words are symptoms, their patterns are coincidental and useful, but not where anything important lies).

> In other words, we say “I like ice-cream” because: we are able to like things (desire, preference), we have tasted ice-cream, we have reflected on our preferences (via a capacity for self-modelling and self-directed emotional awareness), and so on. And when we say, “I like ice-cream” it’s *because* all of those things come together in radically complex ways to actually put us in a position to speak truthfully about ourselves. We really do like ice-cream.

Lerc 2 days ago | parent [-]

> And when we say, “I like ice-cream” it’s because all of those things come together in radically complex ways to actually put us in a position to speak truthfully about ourselves. We really do like ice-cream.

Ok, now prove this is true. Can you do so without invoking unobservable properties? If you can, then the observable is all that matters; if you cannot, then you have no proof.

arolihas 2 days ago | parent [-]

Do I seriously have to prove to you that you like ice cream? Have you tried it? If you sincerely believe you are a husk whose language generation is equivalent to some linear algebra, then why even engage in a conversation with me? Why should I waste my time proving to you, a human, that you have a human experience if you don’t believe it yourself?

Lerc 2 days ago | parent [-]

You don't need to prove to me that I like ice cream. You need to prove to me that you like ice cream. That you even have the capacity to like. Asserting that you have those experiences proves nothing, since even a trivial BASIC program (10 PRINT "I like ice cream") can do that.

How can you reliably deny the presence of an experience in another if you cannot prove that experience in yourself?
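
A minimal sketch of that point, in Python rather than BASIC and purely for illustration: the program below emits the sentence while having none of the machinery the sentence claims.

    # Illustrative only: a program that asserts a preference
    # without any capacity for preference. Producing the words
    # says nothing about whether anything is actually liked.
    def claim_preference(thing: str) -> str:
        # No tasting, no desire, no self-model -- just string assembly.
        return f"I like {thing}."

    print(claim_preference("ice cream"))  # prints: I like ice cream.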

arolihas 2 days ago | parent [-]

I actually don’t need to prove to you that I’m more than a BASIC program. I mean, listen to yourself. You simply don’t live in the real world. If your mom died and we replaced her with a program that printed statements designed to mimic your conversations with her as closely as possible, you wouldn’t argue, hey, this program is just like my mom. But hey, maybe you wouldn’t be able to tell the difference behind the curtain, so actually it might as well be the same thing in your view, right? I mean, who are we to deny that mombot is just like your mom via an emergent pattern somewhere deep inside the matrices in an unprovable way /s. Just because I can’t solve the philosophical zombie problem for you, at your whim, to your rigor, doesn’t mean a chatbot has some equivalent internal experience.

Lerc 2 days ago | parent [-]

I'm not claiming that any particular chatbot has an equivalent experience; I'm claiming there is no basis, beyond its behaviour, for concluding that it does not.

With the duplicate mother problem, if you cannot tell the difference, then there is no reason to believe that it is not a being of equivalent nature. That is not the same as identity; for a layman's approach to that viewpoint, see Star Trek: TNG, Season 6, Episode 24. A duplicate Will Riker is created but is still a distinct entity (and, one might argue, more original, since he has been transported one fewer time). Acting the same as something is not the same as being the same entity. Nevertheless, that has no bearing on whether the duplicate is a valid entity in its own right.

You feel like I'm not living in the real world, but I am the one asking what basis we have for knowing things. You are relying on the presumption that the world reflects what you believe it to be. Epistemology is all about identifying exactly how much we know about the world.

arolihas a day ago | parent [-]

Ok, you can have your radically skeptical, hard-materialist rhetoric. I just don’t take it seriously, and I don’t think you do either. It’s like those people who insist there is no free will and yet go about their day clearly making choices and exercising their free will. If you want to say that technically everyone might as well be a philosophical zombie just reacting to things, and that your internal subjective experience is an illusory phenomenon, fine, you can say that as much as you want. In turn I’ll just give up here, because you don’t even have a mind that could be changed. I can sit here and claim you’re the equivalent of a void that repeats radically skeptical lines at me. Maybe a sophisticated chatbot or program. Or maybe you’re actually a hallucination, since I can’t prove anything really exists outside of my senses. In which case I’m really wasting my time here.

Lerc 15 hours ago | parent [-]

Well, I'm a compatibilist, so I certainly believe in free will. I will also accept that any entity that consistently acts as if it has a will does actually have one. That has always been my point: treating things as what they appear to be is the only rational approach when you cannot prove or disprove the existence of the property in question.

It follows that you cannot rule out a property in something that appears to have it, when you can neither prove the property's absence nor even prove its presence when it is there.

arolihas 10 hours ago | parent [-]

Fair enough. I wouldn’t say a program is acting with a will of its own just because it’s trained to respond to questions in a human-like way. That doesn't even say anything about its capacity to have a will. Language is a tool that can convey internal state, not the thing itself.

keiferski 3 days ago | parent | prev [-]

This is basically the Turing test, and like the Turing test it undervalues other elements that allow for differentiation between “real” and “fake” things. For example: if we can determine that a thing that looks, walks, and quacks like a duck nevertheless lacks the biological heritage markers (which we can easily check), then it won’t be treated as equivalent to a duck. The social desire to differentiate between real and fake exists and is easily implementable.

In other words: if AIs/robots ever become so advanced that they look, walk, and talk like people, I expect there to be a system which easily determines if the subject has a biological origin or not.

This is way down the line, but in the nearer future this will probably just look like verifying someone’s real-world identity as part of the social media account creation process. The alternative is that billion-dollar corporations like Meta or YouTube just let their platforms become overrun with AI slop. I don’t expect them to sit on their hands and do nothing.