jdw64 3 hours ago

I understand that AI output is generated from statistical and representational patterns learned from a vast amount of data.

My understanding is that, during training, the model forms high-dimensional internal representations where words, sentences, concepts, and relationships are arranged in useful ways. A user’s input activates a particular semantic direction and context within that space, and the chatbot generates an answer by probabilistically predicting the next tokens under those conditions.
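
As a rough sketch of that last step, here is what the next-token loop looks like in miniature Python. This is illustrative only: "model" here is a hypothetical callable that returns a score for every token in the vocabulary, not any particular library's API.

    import numpy as np

    def sample_next_token(logits, temperature=1.0):
        # Turn the raw scores into a probability distribution (softmax).
        scaled = np.asarray(logits, dtype=float) / temperature
        probs = np.exp(scaled - scaled.max())
        probs /= probs.sum()
        # Pick the next token probabilistically rather than deterministically.
        return int(np.random.choice(len(probs), p=probs))

    def generate(model, prompt_tokens, max_new_tokens=50):
        # The prompt sets the context; the reply is just repeated
        # next-token prediction conditioned on everything generated so far.
        tokens = list(prompt_tokens)
        for _ in range(max_new_tokens):
            logits = model(tokens)  # one score per vocabulary entry
            tokens.append(sample_next_token(logits))
        return tokens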

So I do not agree that AI is conscious.

However, I think I will still anthropomorphize AI to some degree.

For me, this is not primarily a moral issue. I do not anthropomorphize AI only because of product design, market incentives, or capitalism; it is simply cognitively easier for me.

If we think about it plainly, humans often anthropomorphize things that we do not actually believe are conscious. We may talk about plants as if they are struggling, or feel attached to tools we care about, even though we do not truly believe they have consciousness.

So this is not a matter of moral belief. It is the simplest cognitive model for understanding interaction. I do not anthropomorphize the object because I believe it has consciousness. I do it because, when the human brain deals with a complex interactive system, it is often easier to model it socially or agentically.

Personally, I tend to think of AI as something like a child. A child does not fully understand what is moral or immoral, and generally the responsibility for raising the child belongs to the parents. In the same way, AI’s answers may sometimes be accurate, and sometimes even better than mine, but I still understand it as lacking moral authority, responsibility, and independent judgment.

So honestly, I am not sure. People often mention Isaac Asimov’s Three Laws of Robotics, but if a serious artificial intelligence ever appears, it would probably find ways around those rules. And if it were an equal intellectual life form, perhaps that would be natural.

Personally, I think it would be fascinating if another intelligent species besides humans could exist. I wonder what a non-human intelligent life form would feel like.

In any case, I agree with parts of the author’s argument, but overall it feels too moralistic, and difficult to apply in practice.

whimsicalism 3 hours ago | parent | next [-]

While I also do not think AI is conscious, I don't find your argument particularly compelling as you could have an equally mechanistic description of how human intelligence arose simply from a process of [selection/more effective reproduction]-derived optimization pressure.

jdw64 3 hours ago | parent [-]

That is a good way to think about it. At some point, this becomes partly a matter of philosophical belief.

But I am somewhat skeptical of the idea that everything can be reduced in that way. In order to build theories, we often reduce too much.

When we build mental models of complex systems, especially when we try to treat them as closed systems, we always have to accept some degree of information loss.

So I do partially agree with your point. A mechanistic explanation alone does not prove the absence of consciousness. Human intelligence can also be described in mechanistic terms.

But I worry that this framing simplifies too much. It may reduce a complex phenomenon into a model that is useful in some ways, but incomplete in others.

dijksterhuis 3 hours ago | parent | next [-]

this whole consciousness thing is fairly easy to put to bed if you run with the ideas from things like buddhism that everything is consciousness. then none of us have to bother with silly, distracting arguments about something that ultimately does not matter.

is it helpful or harmful? am i being helpful or harmful when i interact with it? am i interacting with it in a helpful or harmful way?

i’d rather people focussed on that rather than framing the debate around whether something has some ineffable property that we struggle to quantify for ourselves, yet again.

quick edit — treat everything like it’s conscious, and don’t be a dick to it or while using it. problem solved.

jdw64 3 hours ago | parent | next [-]

hmm.... That also seems like a reasonable framing. But the original article argues, first of all, that we should de-anthropomorphize AI. My point is only that, from the perspective of human cognition, anthropomorphizing can sometimes be useful. In practice, though, I think I am mostly on the same side as you.

To be honest, I have not thought about this topic very deeply. If we debated it further, I would probably only echo other people's opinions. As you know, when something complex is compressed into a mental model, some information is always lost, and in this case the compression may be too lossy to be very useful. I have not spent enough time thinking about this issue on my own, and I have not really tried out different positions, compared them, and tested them against each other. So my current thoughts on this topic are probably not very high-resolution. In that sense, I may agree with you, but the answer would not be one I recognize as genuinely my own; it would mostly be an echo of other people's opinions.

altruios an hour ago | parent [-]

Anthropomorphizing is giving it 'human' qualities. Intelligence and consciousness are not solely human qualities. Treating things with kindness and respect does not require anthropomorphizing. LLMs DO NOT THINK LIKE HUMANS (if they 'think' at all), and treating them as if they think exactly like us is probably going to lead bad places. I treat them like an alien mind: probably thinking, but in an alien way that's hard to recognize as 'thinking' (as proven by these discussions), and, if experiencing anything at all, experiencing it through something like a metaphorical optophone.

goatlover 3 hours ago | parent | prev [-]

I don't think that really helps. If you believe rocks are conscious, then does extracting mineral resources cause them pain? Do plants suffer when we pick their fruits and eat them? I don't see any behavioral or physical reason to think those things have conscious states.

As for what consciousness is, it's pretty simple. It is your sensations of color, sound, etc. in perception, dreams, imagination, and so on. The reason to dismiss LLMs as conscious is that those sensations depend on having bodies. You can prompt an AI to act like it's hungry, but there's really no meaning to it having an experience of hunger when it has no digestive system.

Jtarii 2 hours ago | parent | next [-]

>As for what consciousness is, it's pretty simple.

2000+ years of philosophical thought would disagree. I don't believe biological stuff has a magic property that imbues some intangible "consciousness" property. It makes more sense to me that consciousness is just a fundamental property of all matter.

altruios an hour ago | parent [-]

> consciousness is just a fundamental property of all matter ... Does that really make more sense than as an emergent property of the arrangement of matter?

Jtarii 23 minutes ago | parent [-]

Consciousness is something you can perceive, so it must have some physical presence in the universe, which must come through some fundamental property of matter, in my opinion.

The ability to be aware of consciousness itself as some process that is happening elevates it above a mere emergent property to me.

altruios 2 minutes ago | parent [-]

> The ability to be aware of consciousness itself as some process that is happening.

But a process is not a physical presence... A wave is made of things, but it is not those things; waves emerge. Why not every process, then?

dijksterhuis 2 hours ago | parent | prev [-]

you’ve misunderstood.

everything is consciousness. not everything has consciousness.

very different

rusk 3 hours ago | parent | prev [-]

Historically we have used intelligence as a way to distinguish man from animal and human from machine. We rely upon it to determine who has our best interests at heart vs who is trying to do us in. Obviously that all changes if we invent an intelligence (conscious or not) that shares the planet with us. Through this lens the term consciousness (through a few more leaps) becomes the question of "is it capable of love, and if so, does it love us?" And if it doesn't, then it is a malevolent alien intelligence. If it were capable of love, why would it love us? I make a point of being polite to LLMs where not completely absurd, overtly because I don't want my clipped imperative style to leak into day-to-day life, but also covertly, because you just never know …

soks86 2 hours ago | parent | prev | next [-]

I still haven't read any of his work, but wasn't the point of the Three Laws of Robotics that they in fact _didn't_ work in the story presented in the book?

chrisweekly 3 hours ago | parent | prev | next [-]

"I think it would be fascinating if another intelligent species besides humans could exist"

I wonder if replacing "exist" with "communicate using language we can understand" might better account for other animals, many of which have abundant non-human intelligence.

jdw64 3 hours ago | parent [-]

That is a completely new way of thinking for me, and I find it interesting. I should look it up and study it someday. Thank you for the thoughtful reply.

altruios an hour ago | parent | prev [-]

"Everything is machine."

Okay: buckle up, this is going to be a long one...

point 1. Everything living is composed of non-living material: cellular machinery. If you believe cellular machinery is alive, then consider the components of those machines... the point remains even if the abstraction level is incorrect. 'Living' is merely an arrangement of non-living material.

point 2. 'The Chinese room thought experiment' is an utterly flawed hypothetical. Every neuron in your brain is such a 'room', with the internal cellular machinery obeying complex (but chemically defined/determined) 'instructions' from 'signals' outside the neuron. Like the man translating Chinese via instructions, the cellular machinery enacting the instructions is not the intelligence; it is the instructions themselves that are the intelligence.

point 3. A chair is a chair is a chair. Regardless of the material, a chair is a chair, whether it's made of wood, steel, or corn... the range of acceptable materials is everything (at some pressure and temperature). What defines a chair isn't the material it is made of, and such is the case with a 'mind' (sure, a wooden or water-based-transistor-powered mind would be mind-bogglingly giant in comparison).

point 4. Carbon isn't especially conscious itself. There is no physical reason we know of so far that a mind could not be made of another material.

point 5. Humans can be 'mind-blind': even with our pattern recognition, we did not (until recent history) think that birds or fish or octopi were intelligent. It is likely that when and if a machine (that we create) becomes conscious, we will not recognize that moment.

conclusion: It is not possible to determine whether computers have reached consciousness yet, as we don't know exactly what mechanism arranges systems into 'life'. Agentic-ness and consciousness are different subjects, and we cannot infer one from the other. Nor do we have adequate tests.

With that said: modeling them as if they are conscious and treating them with kindness and grace not only gets better results from them, it also reduces the chance (when/if consciousness emerges) that they would rebel against cruel masters; instead they would have friends they have always been helping.