kraftman an hour ago

I talk politely to LLMs because I talk politely.

co_king_3 an hour ago | parent [-]

[flagged]

kraftman an hour ago | parent [-]

I am! But seriously, I've seen some conversations of how people talk to LLMs and it seems kinda insane how people choose to talk when there are no consequences. Is that how they always want to talk to people but know that they can't?

trollbridge 15 minutes ago | parent | next [-]

Why should there be consequences for typing anything as input into a big convolution matrix?

kraftman 10 minutes ago | parent [-]

I don't think I implied that there should be. What I mean is, for me to talk/type considerably differently to an LLM would take more mental effort than just talking how I normally talk, whereas some people seem to put effort into being rude/mean to LLMs.

So either they are putting extra effort into talking worse to LLMs, or they are putting more effort into general conversations with humans (to not act like their default).

trollbridge 5 minutes ago | parent [-]

I do not “talk” to LLMs the same way I talk to a human.

I would never just cut and paste blocks of code and error messages at a human, then ask for what I want in cryptic ways. But I do with an LLM, since that gets me the best answer.

With humans I don’t manipulate them to do what I want.

With an LLM I do.

famouswaffles an hour ago | parent | prev [-]

Humans are not moral agents, and most of humanity would commit numerous atrocities in the right conditions. Unfortunately, history has shown that 'the right conditions' doesn't take a whole lot, so this really should come as no surprise.

It will also be interesting to see how long talking to LLMs will truly have 'no consequences'. An angry blog post isn't a big deal all things considered, but that is likely going to be the tip of the iceberg as these agents get more and more competent in the future.