epiccoleman 3 days ago

> I think that asking people to treat ChatGPT with "kindness and respect" is patently embarrassing. We don't ask people to be nice to their phone's autocorrect, or to Siri, or to the forks in their silverware drawer, because that's stupid.

> ChatGPT deserves no more or less empathy than a fork does.

I agree completely that ChatGPT deserves zero empathy. It can't feel, it can't care, it can't be hurt by your rudeness.

But I think treating your LLM with at least basic kindness is probably the right way to be. Not for the LLM - but for you.

It's not scientific - just a feeling I have - but practicing callousness towards something that presents a simulation of "another conscious thing" seems like it might make you act more callous overall.

So, I'll burn an extra token or two saying "please and thanks".

totallymike 3 days ago | parent | next [-]

I do agree that just being nicer is a good idea, even when it's not required, and for largely the same reasons.

Incidentally, I almost crafted an example of whispering all the slurs and angry words you can think of in the general direction of your phone's autocomplete as an illustration of why LLMs don't deserve empathy, but ended up dropping it because even if nobody is around to hear it, it still feels unhealthy to put yourself in that frame of mind, much less make a habit of it.

barnas2 3 days ago | parent | prev | next [-]

I believe there's also some research showing that being nice gets better responses. Given that it's trained on real conversations, and that's how real conversation works, I'm not surprised.
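If you wanted to check that for yourself, a minimal sketch might look like this (the `ask_llm` helper is a hypothetical stand-in for whatever model API you actually call, not any specific library):

```python
# Hypothetical stand-in for whatever model API you actually call.
def ask_llm(prompt: str) -> str:
    return f"<model response to: {prompt!r}>"

task = "summarize the attached changelog in three bullet points"

polite = ask_llm(f"Hi! Could you please {task}? Thanks!")
terse  = ask_llm(f"{task}.")

# Compare the two however you like: length, accuracy, a scored eval, etc.
print("POLITE:", polite)
print("TERSE: ", terse)
```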

JKCalhoun 3 days ago | parent | prev | next [-]

Hard not to recall a Twilight Zone episode, and even a Night Gallery one, where those who were cruel to machines turned out to be cruel people in general.

goopypoop 3 days ago | parent | prev | next [-]

do you also beg your toilet to flush?

duggan 3 days ago | parent | next [-]

If it could hold a conversation I might.

I also believe AI is a tool, but I'm sympathetic to the idea that, due to some facet of human psychology, being "rude" might train me to be less respectful in other interactions.

Ergo, I might be more likely to treat you like a toilet.

goopypoop 3 days ago | parent | next [-]

Any "conversation" with a machine is dehumanizing.

Are you really in danger of forgetting the humanity of strangers because you didn't anthropomorphize a text generator? If so, I don't think etiquette is the answer.

epiccoleman 2 days ago | parent | next [-]

the thing is, though, that the text generator self-anthropomorphizes.

perhaps if an LLM were trained to be less conversational and more robotic, i would feel less like being polite to it. i never catch myself typing "thanks" to my shell for returning an `ls`.

goopypoop 2 days ago | parent | next [-]

> the thing is, though, that the text generator self-anthropomorphizes.

and that is why it must die!

goopypoop 2 days ago | parent | prev [-]

`alias 'thanks'="echo You\'re welcome!"`

duggan 3 days ago | parent | prev [-]

Words can change minds; it doesn't seem like a huge leap.

Your condescension is noted though.

Filligree 3 days ago | parent | prev [-]

It also makes the LLM work better. If you're rude to it, it won't want to help as much.

totallymike 3 days ago | parent [-]

I understand what you're saying, which is that the response it generates is influenced by your prompt, but feel compelled to observe that LLMs cannot want anything at all, since they are software and have no motivations.

I'd probably have passed this over if it wasn't contextually relevant to the discussion, but thank you for your patience with my pedantry just the same.

epiccoleman 2 days ago | parent | prev [-]

if the primary mode of interaction with my toilet was conversational, then yeah, i'd probably be polite to the toilet. i might even feel a genuine sense of gratitude since it does provide a highly useful service.

jennyholzer 3 days ago | parent | prev [-]

> So, I'll burn an extra token or two saying "please and thanks"

I won't, and I think you're delusional for doing so.

losvedir 3 days ago | parent | next [-]

Interesting. I wonder if this is exactly an example of what the person you're responding to was just saying: that being rude to an LLM has normalized the behavior enough that you feel comfortable being rude to this person too.

totallymike 3 days ago | parent | prev [-]

Eh, this doesn't strike me as wrong-headed. They aren't doing it because they feel duty-bound to be polite to the LLM; they maintain politeness because they choose to stay in that state of mind, even when they're just talking to a chatbot.

If you're writing prompts all day and the extra tokens add up, I can see being clear but terse making a good deal of sense. But if you can afford the extra tokens and it feels better to you, why not?

gardnr 2 days ago | parent [-]

The prompts that I use in production are polite.

Looking at it from a statistical perspective: considering the text from the public internet used during pretraining, it seems likely that, with few exceptions, polite requests achieved their objective more often than terse or plainly rude ones. That signal will be severely muted during fine-tuning, but it is still there in the depths.

English also makes this cheap: the command stays in the imperative mood, and you simply prefix "Please" to it.
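To make that concrete, here's a toy sketch in Python (the helper and prompt text are made up for illustration, not from any actual production system):

```python
def build_prompt(task: str, polite: bool = True) -> str:
    """Wrap a bare imperative in an (optionally) polite frame."""
    if polite:
        # Same imperative mood, just prefixed with "Please" and closed with thanks.
        return f"Please {task}. Thank you!"
    return f"{task}."

print(build_prompt("extract every date mentioned in the text below"))
# -> Please extract every date mentioned in the text below. Thank you!
```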

We have moved up a level in abstraction. It used to be punch cards, then assembler, then high-level syntax, now words. They all do the same thing: instruct a machine. Understanding how the models are designed and trained can help us be more effective at that, just as understanding how compilers work can make us better programmers.