| ▲ | retired 2 hours ago |
I talk politely to LLMs in case our future AI overlords scan my comments to see if I am worthy of food rations. Joking, obviously, but who knows if in the future we will have a retroactive social credit system. For now I am just polite to them because I'm used to it.
|
| ▲ | adsteel_ 2 hours ago | parent | next [-] |
| I talk politely to LLMs because I don't want any impoliteness to leak out to my interactions with humans. |
|
| ▲ | Ekaros 2 hours ago | parent | prev | next [-] |
I wonder if that future will have free speech. Why even let humans post to other humans when they have friendly LLMs to discuss things with? Do we need to be good little humans in our discussions to get our food?
|
| ▲ | kraftman an hour ago | parent | prev | next [-] |
| I talk politely to LLMs because I talk politely. |
▲ | co_king_3 an hour ago | parent [-]
[flagged]

▲ | kraftman an hour ago | parent [-]
I am! But seriously, I've seen some examples of how people talk to LLMs, and it seems kinda insane how people choose to talk when there are no consequences. Is that how they always want to talk to people but know that they can't?

▲ | trollbridge 15 minutes ago | parent | next [-]
Why should there be consequences for typing anything as input into a big convolution matrix?

▲ | kraftman 10 minutes ago | parent [-]
I don't think I implied that there should be. What I mean is: for me to talk/type considerably differently to an LLM would take more mental effort than just talking how I normally talk, whereas some people seem to put effort into being rude/mean to LLMs. So either they are putting extra effort into talking worse to LLMs, or they are putting more effort into their general conversations with humans (to not act like their default).

▲ | trollbridge 5 minutes ago | parent [-]
I do not “talk” to LLMs the same way I talk to a human. I would never cut and paste blocks of code and error messages at a human, followed by a cryptic request for what I want. But I do that with an LLM, since it gets me the best answer that way. With humans I don’t manipulate them to do what I want. With an LLM I do.
|
▲ | famouswaffles an hour ago | parent | prev [-]
Humans are not moral agents, and most of humanity would commit numerous atrocities under the right conditions. Unfortunately, history has shown that 'the right conditions' don't take a whole lot, so this really should come as no surprise.

It will also be interesting to see how long talking to LLMs will truly have 'no consequences'. An angry blog post isn't a big deal, all things considered, but that is likely just the tip of the iceberg as these agents get more and more competent in the future.

| ▲ | WarmWash an hour ago | parent | prev | next [-] |
My wager is to treat the AI well: if AI overlords come about, you stand to gain, and if they don't, nothing changes. This also avoids the caveat of Pascal's wager, that you don't know which god to worship.
|
| ▲ | mystraline 2 hours ago | parent | prev [-] |
> Joking, obviously, but who knows if in the future we will have a retroactive social credit system.

China doesn't actually have that; it was pure propaganda. In fact, it's the USA that has one. And it decides whether you can get a good job, where you can live, whether you deserve housing, and more.

▲ | co_king_3 an hour ago | parent | prev [-]
Usually when Republicans say "China is doing [insert horrible thing here]", it means: "We (read: Republicans and Democrats) would like to start doing [insert horrible thing here] to the American people."
|