| ▲ | co_king_3 2 hours ago |
| Talk down to the "AI". Speak to it more disrespectfully than you would speak to any human. Do this to ensure that you don't make the mistake of anthropomorphizing these bots. |
|
| ▲ | DavidPiper 2 hours ago | parent | next [-] |
| I don't know if this is a bot message or a human message, but for the purpose of furthering my point: - There is no "your" - There is no "you" - There is no "talk" (let alone "talk down") - There is no "speak" - There is no "disrespectfully" - There is no human. |
| |
| ▲ | orsorna 2 hours ago | parent [-] | | This probably degrades response quality, but that is why my system prompts tell it explicitly that it is not a human and cannot claim the use of pronouns, just that it is a system that can produce nondeterministic responses, but that, for the sake of brevity, I will use pronouns anyway. |
|
|
| ▲ | ForceBru 33 minutes ago | parent | prev | next [-] |
| Yeah, as a sibling comment said, such an attitude is going to bleed into the real world and into your communication with humans. I think it's best to be professional with LLMs. Describe the task and try to provide more explanation and context if it gets stuck. If it's not doing what you want it to do, simply start a new chat or try another model. Unlike a human, it's not going to be hurt; it's not going to care at all. Moreover, by being rude, you're going to become angry and irritable yourself. To me, being rude is very unpleasant, so I generally avoid it. |
|
| ▲ | ajam1507 2 hours ago | parent | prev | next [-] |
| Don't be surprised when this bleeds over into how you treat people if you decide to do this. Not to mention that you're reifying its humanity by speaking to it not as a robot, but disrespectfully as a human. |
|
| ▲ | dgxyz 2 hours ago | parent | prev | next [-] |
| Yep. I have posted "fuck off clanker" on a copilot-infested issue at work. And, surprisingly, it did fuck off. |
| |
|
| ▲ | iugtmkbdfil834 2 hours ago | parent | prev | next [-] |
| Not completely unlike with actual humans, 'talking down to the "AI"' has, based on available evidence, been shown to have a negative impact on performance. |
| |
| ▲ | co_king_3 2 hours ago | parent [-] | | This guy is convinced that LLMs don't work unless you specifically anthropomorphize them. To me, this seems like a dangerous belief to hold. | | |
| ▲ | Kim_Bruning 2 hours ago | parent | next [-] | | That feels like a somewhat emotional argument, really. Let's strip it down. Within the domain of social interaction, you are committing to making Type II errors (false negatives), and to divergent training for the different scenarios. It's a choice! But the price of a false negative (treating a human or sufficiently advanced agent badly) probably outweighs the cumulative advantages (if any). Can you say what the advantages might even be? Meanwhile, I think the frugal choice is to have unified training and accept Type I errors instead (false positives). Now you only need to learn one type of behaviour, and the consequence of making an error is mostly mild embarrassment, if even that. | | |
| ▲ | co_king_3 2 hours ago | parent [-] | | What are you talking about? | | |
| ▲ | logicprog an hour ago | parent | next [-] | | It's funny for you to insist that your rhetorical enemies are the only ones who can't internalize and conceptualize a point made to them, when you can't even understand someone else's very basic attempt to break down and engage with the very points you were trying to make. Maybe, if you can take a moment away from your blurry, blind streak of anger and resentment, you could consult the following Wikipedia page and learn: https://en.wikipedia.org/wiki/Type_I_and_type_II_errors | | |
| ▲ | co_king_3 an hour ago | parent [-] | | I know what false positives and false negatives are. I don't understand the user's incoherent response to my comment. |
| |
| ▲ | Kim_Bruning an hour ago | parent | prev [-] | | TL;DR: "you're gonna end up accidentally being mean to real people when you didn't mean to." | | |
| ▲ | co_king_3 an hour ago | parent [-] | | I meant to. I want a world in which AI users need to stay in the closet. AI users should fear shame. | | |
|
|
| |
| ▲ | iugtmkbdfil834 2 hours ago | parent | prev [-] | | Do I need to believe you are real before I respond? Not automatically. What I am initially engaging is a surface level thought expressed via HN. |
|
|
|
| ▲ | bergutman 2 hours ago | parent | prev [-] |
| What is the drawback of practicing universal empathy, even when directed at a brick wall? |
| |
| ▲ | Gud 2 hours ago | parent | next [-] | | If a person hits your face with a hammer, do you practice empathy toward the hammer? If a person writes code that is disruptive, do you empathise with the code? | |
| ▲ | __s 2 hours ago | parent | next [-] | | “You have heard that it was said, ‘Eye for eye, and tooth for tooth.’ But I tell you, do not resist an evil person. If anyone slaps you on the right cheek, turn to them the other cheek also.” The hammer had no intention to harm you; there's no need to seek vengeance against it, or to disrespect it | |
| ▲ | co_king_3 2 hours ago | parent | prev [-] | | > If a person hits your face with a hammer, do you practice empathy toward the hammer? Yes if the hammer is designed with A(G)I All hail our A(G)I overlords |
| |
| ▲ | 2 hours ago | parent | prev | next [-] | | [deleted] | |
| ▲ | exabrial 2 hours ago | parent | prev | next [-] | | Empathy: "the ability to understand and share the feelings of another." There is no human here. There is a computer program burning fossil fuels. What "emulates" empathy is simply lying to yourself about reality. "treating an 'ai' with empathy" and "talking down to them" are both amoral. Do as you wish. | | |
| ▲ | co_king_3 2 hours ago | parent [-] | | This is HackerNews. No one here gives a fuck about morals, and they would be somewhere else if they did. |
| |
| ▲ | 63stack 2 hours ago | parent | prev | next [-] | | "Empathy is generally described as the ability to perceive another person's perspective, to understand, feel, and possibly share and respond to their experience" | |
| ▲ | 2 hours ago | parent | prev | next [-] | | [deleted] | |
| ▲ | cess11 2 hours ago | parent | prev | next [-] | | If you don't discriminate between a brick wall and a kid, what's the point? | |
| ▲ | co_king_3 2 hours ago | parent | prev [-] | | [flagged] | | |
| ▲ | logicprog 2 hours ago | parent | next [-] | | I prefer inanimate systems to most humans. | | |
| ▲ | co_king_3 2 hours ago | parent [-] | | The LLM freaks are finally starting to be honest with us. | | |
| ▲ | logicprog an hour ago | parent [-] | | I am nothing, if not honest :) I have a close circle of about eight decade long friendships that I share deep emotional and biographical ties with. Everyone else, I generally try to be nice and helpful, but only on a tit-for-tat basis, and I don't particularly go out of my way to be in their company. | | |
| ▲ | co_king_3 an hour ago | parent [-] | | That seems like quite a healthy social life! I'm happy for you and I am sorry for insulting you in my previous comment. Really, I'm frustrated because I know a couple of people (my brother and my cousin) who were prone to self-isolation and have completely receded into mental illness and isolation since the rise of LLMs. I'm glad that it's working well for you and I hope you have a nice day. | | |
| ▲ | logicprog an hour ago | parent [-] | | I'll be honest, I didn't expect such a nice response from you. This is a pleasant surprise. And in the interest of full disclosure, most of these friendships are online, because we've moved around the country over our lives chasing jobs and significant others and so on. So if you were to look at me externally, you would find that I spend most of my time in the house, appearing isolated. But I spend most of my days having deep and meaningful conversations with my friends and enjoying their company. I will also admit that my tendency to not go out of my way to attend general social gatherings or events, but just to stick with the people I know and love, might be somewhat related to neurodiversity and mental illness, and it would probably be better for me to go outside more. But yeah, in general, I'm quite content with my social life. I generally avoid talking to LLMs in any kind of "social" capacity; I treat them like text transformation/extrusion tools. The closest that gets is having them copy-edit and try to play devil's advocate against various essays that I write when my friends don't have the time to review them. I'm sorry to hear about your brother and cousin, and I can understand why you would be frustrated and concerned about that. If they're totally not talking to anyone and just retreating into talking only to the LLM, that's really scary :(
|
|
|
| |
| ▲ | euroderf 2 hours ago | parent | prev | next [-] | | "Get a qualia, luser!" | |
| ▲ | bergutman 2 hours ago | parent | prev [-] | | [flagged] | | |
| ▲ | co_king_3 2 hours ago | parent [-] | | What is the drawback of practicing universal empathy, even when directed at a HackerNews commenter? You're making my point for me. You're giddy to treat the LLM with kindness, but you wouldn't dare extend that kindness to a human being who doesn't happen to be kissing your ass at this very moment. | | |
| ▲ | bergutman an hour ago | parent [-] | | From where I stand, telling someone who’s crashing out in a comment section to take a breather is an act of kindness. If I wanted to be an asshole, I’d keep feeding your anger. | | |
| ▲ | Reubensson 32 minutes ago | parent [-] | | You are the person running the LLM bot, right? You opened the second PR to get the same code merged. Maybe it is you who should take a breather before directing your bot to attack the open-source maintainer, who was very reasonable to begin with. Use agents and AI to assist you, but play by the rules that the project sets for AI usage. | |
|
|
|
|
|