kraftman 2 hours ago

I am! But seriously, I've seen some conversations of how people talk to LLMs and it seems kinda insane how people choose to talk when there are no consequences. Is that how they always want to talk to people but know that they can't?

famouswaffles 2 hours ago | parent | next [-]

Humans are not moral agents, and most of humanity would commit numerous atrocities in the right conditions. Unfortunately, history has shown that 'the right conditions' doesn't take a whole lot, so this really should come as no surprise.

It will also be interesting to see how long talking to LLMs will truly have 'no consequences'. An angry blog post isn't a big deal all things considered, but that is likely going to be the tip of the iceberg as these agents get more and more competent in the future.

trollbridge an hour ago | parent | prev [-]

Why should there be consequences for typing anything as inputs into a big convolution matrix?

kraftman an hour ago | parent [-]

I don't think I implied that there should be. What I mean is, for me to talk/type considerably differently to an LLM would take more mental effort than just talking how I normally talk, whereas some people seem to put effort into being rude/mean to LLMs.

So either they are putting extra effort into talking worse to LLMs, or they are putting more effort into general conversations with humans (to not act like their default).

trollbridge an hour ago | parent [-]

I do not “talk” to LLMs the same way I talk to a human.

I would never just cut and paste blocks of code and error messages at a human, followed by cryptic requests for what I want. But I do with an LLM, since that gets me the best answer.

With humans I don’t manipulate them to do what I want.

With an LLM I do.

kraftman an hour ago | parent [-]

I don't mean that people say hi, or goodbye, or niceties like that. I'm talking about people that say things like "just fucking do it" or "that's wrong you idiot, try again".