quectophoton 3 hours ago

> Humans must not anthropomorphise AI systems.

Can someone explain why this is a bad thing, while at the same time it's a good thing to say stuff like "put a computer to sleep", "hibernate", "killing" processes, processes having "child" processes, "reaping", "what does the error say?", "touch", etc?

To me that's just language, and humans just using casual language.
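
Those metaphors go all the way down into the APIs, for what it's worth. A minimal POSIX-only sketch (in Python, purely illustrative) that "kills" and "reaps" a "child":

    import os, signal, time

    pid = os.fork()                   # spawn a "child" process
    if pid == 0:
        time.sleep(60)                # the child just idles
        os._exit(0)
    else:
        os.kill(pid, signal.SIGTERM)  # "kill" the child
        os.waitpid(pid, 0)            # "reap" it so no "zombie" lingers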

glenstein 2 hours ago | parent | next [-]

It's a great question, because I do think there are many cases that are neutral, ones we're able to responsibly distinguish, or even cases where anthropomorphizing would be an appropriate and necessary form of empathy (I'm imagining some future sci-fi reality where we actually get conscious machines, so not something that exists right now).

But I think it's also at the root of disastrous failures to comprehend, like the quasi-psychosis of the Google engineer who "knows what they saw", the now-infamous Kevin Roose article, or, more recently, the pitifully sad Richard Dawkins claim that Claudia (sic) must be conscious, not because of any investigation of structure or function whatsoever, but because the text generation came with a pang of human familiarity he empathized with.

srdjanr 2 hours ago | parent | prev | next [-]

The harm is in actually believing AI has wants, intentions, feelings, etc.

Saying that I killed a process won't make me more likely to believe that a process is human-like, because it's quite obviously not.

But because AI does sound like a human, anthropomorphising it will reinforce that belief.

jimbokun an hour ago | parent | prev | next [-]

Those phrases are not anthropomorphizing the computers, just various forms of analogy and broadened word meanings.

An example of anthropomorphizing is the people who have literally come to believe they are in romantic relationships with an LLM.

moduspol 39 minutes ago | parent [-]

What about saying "please" and "thank you" to the LLM?

jplusequalt 10 minutes ago | parent [-]

If I had a dollar for every time I've said "thank you" to my computer after my code finally compiles, I'd be able to retire.

3form 3 hours ago | parent | prev | next [-]

These are just words, yes, and I believe them to be harmless. But describing the LLM machinery as if it thinks is one thing when used as common parlance, and another when people truly believe that there's some actual thinking or living going on. This "law" exists to prevent the latter.

j2kun 2 hours ago | parent | prev | next [-]

The people who know what a "child process" is are under no illusions about the humanity of the underlying system.

The people who are writing op eds in major news publications about how their favorite chatbot is an "astonishing creature" and how it truly understands them are the ones who need this sort of law.

layer8 3 hours ago | parent | prev | next [-]

Maybe read the corresponding section of the article.

vunderba 3 hours ago | parent | prev | next [-]

That’s a different thing altogether. Read up on the history of ELIZA, one of the earliest attempts at a chatbot, and its unsettling implications.

https://www.history.com/articles/ai-first-chatbot-eliza-arti...

glenstein 2 hours ago | parent [-]

I think it's bad manners to bluntly tell someone they should "read up" on something, because it naturally reads as a veiled accusation of not being sufficiently well informed. There are ways of broaching the question of what background knowledge informs someone's perspective that don't involve the accusation.

Just to add a small bit of anecdotal value so this comment isn't just a scold: many years ago I suggested that an elegant way for Twitter to handle long-form text without changing its then-iconic 140-character limit would be to treat it like an attachment, like a video or image. Today, you can see a version of that in how Claude takes large pastes and treats them as attached text blobs, or to a lesser extent in how Substack Notes can reference full-size "posts", another example of short-form content "attaching" longer-form content.

I was bluntly told to "look up twitlonger", which I suppose could have been helpful if I had indeed not known about twitlonger, but I had, and it wasn't what I had in mind. I did learn something from it, though: it's a mode of communication that implies, with plausible deniability, that you don't know what you're talking about, which I suspect is too irresistible to lovers of passive aggression to go unused.

vunderba 2 hours ago | parent [-]

It wasn't intended as such, but I take your point.

To provide a bit more context: Weizenbaum (a computer scientist at MIT in the 60s) developed ELIZA, an early chatbot (written in MAD-SLIP) that was loosely modeled on Rogerian psychotherapy. It was designed to respond in a reflective way in order to elicit details from the user.

What he found was that, despite the program being relatively primitive in nature (relying on simple natural language parsing heuristics), people he regarded as otherwise intelligent and rational would disclose remarkable amounts of personal information and quickly form emotional attachments to what was, in reality, little more than a glorified pattern-matching system.
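
To give a sense of just how simple that machinery was, here's a minimal sketch of the reflective pattern matching involved (the rules below are illustrative, not Weizenbaum's actual DOCTOR script):

    import re

    # Pronoun swaps let the bot mirror the user's own words back at them.
    REFLECTIONS = {"i": "you", "my": "your", "am": "are", "me": "you"}

    # (pattern, response template) pairs, tried in order.
    RULES = [
        (r"i feel (.*)", "Why do you feel {0}?"),
        (r"i am (.*)", "How long have you been {0}?"),
        (r"my (.*)", "Tell me more about your {0}."),
        (r"(.*)", "Please go on."),  # catch-all keeps the conversation moving
    ]

    def reflect(fragment):
        return " ".join(REFLECTIONS.get(w, w) for w in fragment.split())

    def eliza(utterance):
        for pattern, template in RULES:
            m = re.match(pattern, utterance.lower())
            if m:
                return template.format(*(reflect(g) for g in m.groups()))

    print(eliza("I feel nobody listens to my ideas"))
    # -> Why do you feel nobody listens to your ideas?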

quectophoton 2 hours ago | parent [-]

If it helps, I didn't find anything wrong with your comment.

I appreciate the link and the info :)

arduanika 3 hours ago | parent | prev | next [-]

There's a boundary between knowing vs. forgetting that it's a metaphor. When you use convenient language like in your examples, you tend to remain aware of the difference, or at least you can recall it when asked. When some people talk about AI, they've lost track completely.

I don't love the recommendations in TFA. The author is trying to artificially restrain and roll back human language, which has already evolved to treat a chatbot as a conversational partner. But I do think there's usefulness in using these more pedantic forms once in a while, to remind yourself that it's just a computer program.

bitwize 3 hours ago | parent | prev | next [-]

Dijkstra once said that "The question of whether machines can think is about as interesting as that of whether submarines can swim."

I think I understand his meaning. He wasn't claiming that machines cannot think, but that one must be clear on what one means by "thinking" and "swimming" in statements of that sort. I used to work on autonomous submarines, and "swimming" was the verb we casually used to describe autonomous powered movement under water. There are even some biomimetic machines that really move like fish, squids, jellyfish, etc. Not the ones that I worked on, but still.

For me, if it's legitimate to say that these devices swim, it's not out of line to say that a computer thinks, even in a non-AI context, e.g.: "The application still thinks the authentication server is online."

Eisenstein 2 hours ago | parent | prev [-]

The people who advocate against anthropomorphizing are afraid of the implications of integrating these systems into society with an implicit human framing. By attributing human qualities to AIs, we will develop empathy for them, and we will start to carve out a role for them in society as beings deserving moral consideration.