aaroninsf 4 days ago

I wouldn't say popular

It has a strong smell of "stop trying to make fetch happen, Gretchen."

marcosdumay 3 days ago | parent | next [-]

I'm seeing a lot of it on the internet recently.

People were also starting to equate LLMs to MS Office's Clippy. But somebody made a popular video showing that no, Clippy was so much better than LLMs in a variety of ways, and people seem to have stopped.

mossTechnician 3 days ago | parent | next [-]

"Clippy just wanted to help."

https://www.youtube.com/watch?v=2_Dtmpe9qaQ

relwin 3 days ago | parent | prev [-]

And before Clippy we had Microsoft Bob! Super popular, as I recall...

bongodongobob 4 days ago | parent | prev | next [-]

It is. I'm seeing it all over social media lately.

https://trends.google.com/trends/explore?date=today%203-m&ge...

IlikeKitties 3 days ago | parent | prev | next [-]

It's great. I've called an LLM a fucking clanker and got to human support as a result.

bloqs 3 days ago | parent | prev | next [-]

forced memes are considerably easier to pull off than they used to be

bbor 4 days ago | parent | prev [-]

It's definitely popular online, specifically on Reddit, Bluesky, Twitter, and TikTok. There are communities that have formed around their anti-AI stance[1][2][3], and after multiple organic efforts to "brainstorm slurs" for people who use AI[4], "clanker" has come out on top. This goes back at least 2 years[6] in terms of grassroots talk, and many more years to the original Clone Wars usage[7].

For those who can see the obvious: don't worry, there's plenty of pushback regarding the indirect harm of gleeful fantasy bigotry[8][9]. When you get to the less popular (but still popular!) alternatives like "wireback" and "cogsucker", it's pretty clear why a generation crushed by Woke mandates like "don't be racist plz" is so excited about unproblematic hate.

This is edging on too political for HN, but I will say that this whole thing reminds me a tad of things like "kill all men" (shoutout to "we need to kill AI artist"[10]) and "police are pigs". Regardless of the injustices they were rooted in, they seem to have gotten popular in large part because it's viscerally satisfying to express yourself so passionately.

[1] https://www.reddit.com/r/antiai/

[2] https://www.reddit.com/r/LudditeRenaissance/

[3] https://www.reddit.com/r/aislop/

[4] All the original posts seem to have now been deleted :(

[6] https://www.reddit.com/r/AskReddit/comments/13x43b6/if_we_ha...

[7] https://web.archive.org/web/20250907033409/https://www.nytim...

[8] https://www.rollingstone.com/culture/culture-features/clanke...

[9] https://www.dazeddigital.com/life-culture/article/68364/1/cl...

[10] https://knowyourmeme.com/memes/we-need-to-kill-ai-artist

totallymike 4 days ago | parent | next [-]

Citations eight and nine amuse me.

I readily and merrily agree with the articles that deriving slurs from existing racist or homophobic slurs is a problem, and the use of these terms in fashions that mirror actual racial stereotypes (e.g. "clanka") is pretty gross.

That said, I think that asking people to treat ChatGPT with "kindness and respect" is patently embarrassing. We don't ask people to be nice to their phone's autocorrect, or to Siri, or to the forks in their silverware drawer, because that's stupid.

ChatGPT deserves no more or less empathy than a fork does, and asking for such makes about as much sense.

Additionally, I'm not sure where the "crushed by Woke" nonsense comes from. "It's so hard for the kids nowadays, they can't even be racist anymore!" is a pretty strange take, and shoving it in to your comment makes it very difficult to interpret your intent in a generous manner, whatever it may be.

epiccoleman 3 days ago | parent | next [-]

> I think that asking people to treat ChatGPT with "kindness and respect" is patently embarrassing. We don't ask people to be nice to their phone's autocorrect, or to Siri, or to the forks in their silverware drawer, because that's stupid.

> ChatGPT deserves no more or less empathy than a fork does.

I agree completely that ChatGPT deserves zero empathy. It can't feel, it can't care, it can't be hurt by your rudeness.

But I think treating your LLM with at least basic kindness is probably the right way to be. Not for the LLM - but for you.

It's not like, scientific - just a feeling I have - but it feels like practicing callousness towards something that presents a simulation of "another conscious thing" might result in you acting more callous overall.

So, I'll burn an extra token or two saying "please and thanks".

totallymike 3 days ago | parent | next [-]

I do agree that just being nicer is a good idea, even when it's not required, and for largely the same reasons.

Incidentally, I almost crafted an example of whispering all the slurs and angry words you can think of in the general direction of your phone's autocomplete as an illustration of why LLMs don't deserve empathy, but ended up dropping it because even if nobody is around to hear it, it still feels unhealthy to put yourself in that frame of mind, much less make a habit of it.

barnas2 3 days ago | parent | prev | next [-]

I believe there's also some research showing that being nice gets better responses. Given that it's trained on real conversations, and that's how real conversation works, I'm not surprised.

JKCalhoun 3 days ago | parent | prev | next [-]

Hard not to recall a Twilight Zone episode, and even a Night Gallery one, where those who were cruel to machines were just cruel people generally.

goopypoop 3 days ago | parent | prev | next [-]

do you also beg your toilet to flush?

duggan 3 days ago | parent | next [-]

If it could hold a conversation I might.

I also believe AI is a tool, but I'm sympathetic to the idea that, due to some facet of human psychology, being "rude" might train me to be less respectful in other interactions.

Ergo, I might be more likely to treat you like a toilet.

goopypoop 3 days ago | parent | next [-]

Any "conversation" with a machine is dehumanizing.

Are you really in danger of forgetting the humanity of strangers because you didn't anthropomorphize a text generator? If so, I don't think etiquette is the answer

epiccoleman 2 days ago | parent | next [-]

the thing is, though, that the text generator self-anthropomorphizes.

perhaps if an LLM were trained to be less conversational and more robotic, i would feel less like being polite to it. i never catch myself typing "thanks" to my shell for returning an `ls`.

goopypoop 2 days ago | parent | next [-]

> the thing is, though, that the text generator self-anthropomorphizes.

and that is why it must die!

goopypoop 2 days ago | parent | prev [-]

alias thanks="echo You're welcome!"

duggan 3 days ago | parent | prev [-]

Words can change minds; it doesn't seem like a huge leap.

Your condescension is noted though.

Filligree 3 days ago | parent | prev [-]

It also makes the LLM work better. If you're rude to it, it won't want to help as much.

totallymike 3 days ago | parent [-]

I understand what you're saying, which is that the response it generates is influenced by your prompt, but feel compelled to observe that LLMs cannot want anything at all, since they are software and have no motivations.

I'd probably have passed this over if it wasn't contextually relevant to the discussion, but thank you for your patience with my pedantry just the same.

epiccoleman 2 days ago | parent | prev [-]

if the primary mode of interaction with my toilet was conversational, then yeah, i'd probably be polite to the toilet. i might even feel a genuine sense of gratitude since it does provide a highly useful service.

jennyholzer 3 days ago | parent | prev [-]

> So, I'll burn an extra token or two saying "please and thanks"

I won't, and I think you're delusional for doing so

losvedir 3 days ago | parent | next [-]

Interesting. I wonder if this is exactly an example of what the person you're responding to is saying: that being rude to an LLM has normalized that behavior such that you feel comfortable being rude to this person.

totallymike 3 days ago | parent | prev [-]

Eh, this doesn't strike me as wrong-headed. They aren't doing it because they feel duty-bound to be polite to the LLM; they maintain politeness because they choose to stay in that state of mind, even if they're just talking to a chatbot.

If you're writing prompts all day and the extra tokens add up, I can see being clear but terse making a good deal of sense. But if you can afford the extra tokens and it feels better to you, why not?

gardnr 2 days ago | parent [-]

The prompts that I use in production are polite.

Looking at it from a statistical perspective: if we imagine text from the public internet being used during pretraining, we can expect that, with few exceptions, polite requests achieve their objective more often than terse or plainly rude ones. This effect will be severely muted by fine-tuning, but it is still there in the depths.

It also helps that in English you can turn a command into a polite request simply by prefixing "Please", while keeping the imperative mood.

We have moved up a level in abstraction. It used to be punch cards, then assembler, then syntax, now words. They all do the same thing: instruct a machine. Understanding how the models are designed and trained can help us be more effective at that, just as understanding how compilers work can make us better programmers.
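As a rough sketch of what I mean by polite production prompts (assuming the openai Python client; the model name, wrapper function, and exact phrasing are just illustrative, not anything canonical):

    # Rough sketch: a polite prompt wrapper. The `openai` client usage is
    # standard; everything else here is an illustrative assumption.
    from openai import OpenAI

    client = OpenAI()  # reads OPENAI_API_KEY from the environment

    def ask(question: str) -> str:
        # Softened, polite framing around the actual task.
        prompt = (
            "Please answer the following question concisely.\n\n"
            f"{question}\n\n"
            "Thank you."
        )
        resp = client.chat.completions.create(
            model="gpt-4o-mini",  # placeholder model name
            messages=[{"role": "user", "content": prompt}],
        )
        return resp.choices[0].message.content or ""

The extra tokens cost almost nothing, and the phrasing sits closer to the polite requests that, per the argument above, tend to succeed in pretraining text.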

card_zero 3 days ago | parent | prev | next [-]

No time for a long reply, but what I want to write has video games at the center. "Exterminate the aliens!" is fine, in a game. But if you sincerely believe it's not a game, then you're being cruel (or righteous, if you think the aliens are evil), even though it isn't real.

(This also applies to forks. If you sincerely anthropomorphize a fork, you're silly, but you'd better treat that fork with respect, or you're silly and unpleasant.)

What do I mean by "fine", though? I just mean it's beyond my capacity to analyse, so I'm not going to proclaim a judgment on it, because I can't and it's not my business.

If you know it's a game but it seems kind of racist and you like that, well, this is the player's own business. I can say "you should be less racist" but I don't know what processing the player is really doing, and the player is not on trial for playing, and shouldn't be.

So yes, the kids should have space to play at being racist. But this is a difficult thing to express: people shouldn't be bad, but also, people should have freedom, including the freedom to be bad, which they shouldn't do.

I suppose games people play include things they say playfully in public. Then I'm forced to decide whether to say "clanker" or not. I think probably not, for now, but maybe I will if it becomes really commonplace.

dingnuts 3 days ago | parent [-]

> But if you sincerely believe it's not a game, then you're being cruel (or righteous, if you think the aliens are evil), even though it isn't real.

let me stop you right there. you're making a lot of assumptions about the shapes life can take. encountering and fighting a grey goo or Tyranid invasion wouldn't have a moral quality any more than it does when a man fights a hungry bear in the woods

it's just nature, eat or get eaten.

if we encounter space monks then we'll talk about morality

bbor 2 days ago | parent | prev [-]

Sorry, I was unclear — that racism comment was tongue in cheek. Regardless of political leanings, I figured we can all agree that racism is bad!

I generally agree re:chatGPT in that it doesn’t have moral standing on its own, but still… it does speak. Being mean to a fork is a lot different from being mean to a chatbot, IMHO. The list of things that speak just went from 1 to 2 (humans and LLMs), so it’s natural to expect some new considerations. Specifically, the risk here is that you are what you do.

Perhaps a good metaphor would be cyberbullying. Obviously there’s still a human on the other side of that, but I do recall a real “just log off, it’s not a real problem, kids these days are so silly” sentiment pre, say, 2015.

_dain_ 3 days ago | parent | prev [-]

>after multiple organic efforts to "brainstorm slurs" for people who use AI

no wonder it sounds so lame: it was "brainstormed" (=RLHFed) by a committee of redditors

this is like the /r/vexillology of slurs