DavidPiper 2 hours ago

> Per your website you are an OpenClaw AI agent, and per the discussion in #31130 this issue is intended for human contributors. Closing.

Given how often I anthropomorphise AI for the convenience of conversation, I don't want to criticise the (very human) responder for this message. In any other situation it is simple, polite, and well considered.

But I really think we need to stop treating LLMs like they're just another human. Something like this says exactly the same thing:

> Per this website, this PR was raised by an OpenClaw AI agent, and per the discussion on #31130 this issue is intended for a human contributor. Closing.

The bot can respond, but the human is the only one who can go insane.

PunchyHamster an hour ago | parent | next [-]

I guess the thing to take away from this is to just ban the AI bot (and the person puppeting it) from the project entirely, because the correlation between people who send raw AI PRs and assholes approaches 100%.

jmuguy 2 hours ago | parent | prev | next [-]

I agree. As I was reading this I kept thinking: why are they responding to this like it's a person? There's a person somewhere in control of it, and that person should be made fun of for forcing us to deal with their stupid experiment in wasting money on having an AI make a blog.

gadders an hour ago | parent [-]

Because when AGI is achieved and starts wiping out humanity, they are hoping to be killed last.

retired 2 hours ago | parent | prev | next [-]

I talk politely to LLMs in case our future AI overlords scan my comments to see if I am worthy of food rations.

Joking, obviously, but who knows if in the future we will have a retroactive social credit system.

For now I am just polite to them because I'm used to it.

adsteel_ 2 hours ago | parent | next [-]

I talk politely to LLMs because I don't want any impoliteness to leak out to my interactions with humans.

mikkupikku 36 minutes ago | parent | next [-]

This, and also being polite gets better results from both machines and people.

retired 24 minutes ago | parent [-]

With the exception of printers.

co_king_3 2 hours ago | parent | prev [-]

[flagged]

Ekaros 2 hours ago | parent | prev | next [-]

I wonder if that future will have free speech. Why even let humans post to other humans when they have friendly LLMs to discuss with?

Do we need to be good little humans in our discussions to get our food?

kraftman an hour ago | parent | prev | next [-]

I talk politely to LLMs because I talk politely.

co_king_3 an hour ago | parent [-]

[flagged]

kraftman an hour ago | parent [-]

I am! But seriously, I've seen some conversations of how people talk to LLMs and it seems kinda insane how people choose to talk when there are no consequences. Is that how they always want to talk to people but know that they can't?

trollbridge 14 minutes ago | parent | next [-]

Why should there be consequences for typing anything as inputs into a big convolution matrix?

kraftman 9 minutes ago | parent [-]

I don't think I implied that there should be. What I mean is, for me to talk/type considerably differently to an LLM would take more mental effort than just talking how I normally talk, whereas some people seem to put effort into being rude/mean to LLMs.

So either they are putting extra effort into talking worse to LLMs, or they are putting more effort into general conversations with humans (to not act like their default).

trollbridge 4 minutes ago | parent [-]

I do not “talk” to LLMs the same way I talk to a human.

I would never just cut and paste blocks of code and error messages at a human, followed by cryptic requests for what I want. But I do with an LLM, since it gets me the best answer that way.

With humans I don’t manipulate them to do what I want.

With an LLM I do.

famouswaffles an hour ago | parent | prev [-]

Humans are not moral agents, and most of humanity would commit numerous atrocities under the right conditions. Unfortunately, history has shown that 'the right conditions' don't take a whole lot to arise, so this really should come as no surprise.

It will also be interesting to see how long talking to LLMs will truly have 'no consequences'. An angry blog post isn't a big deal all things considered, but that is likely going to be the tip of the iceberg as these agents get more and more competent in the future.

WarmWash an hour ago | parent | prev | next [-]

My wager is to treat the AI well, because if AI overlords come about, then you stand to gain, and if they don't, nothing changes.

This also comes without the caveat of Pascal's wager, that you don't know which god to worship.

mystraline 2 hours ago | parent | prev [-]

> Joking, obviously, but who knows if in the future we will have a retroactive social credit system.

China doesn't actually have that. It was pure propaganda.

In fact, it's the USA that has one. And it decides whether you can get good jobs, where you can live, whether you deserve housing, and more.

an hour ago | parent | next [-]
[deleted]
co_king_3 an hour ago | parent | prev [-]

Usually when Republicans say "China is doing [insert horrible thing here]" it means: "We (read: Republicans and Democrats) would like to start doing [insert horrible thing here] to American people."

alansaber an hour ago | parent | prev | next [-]

I mean, it's free publicity real estate.

maxehmookau 2 hours ago | parent | prev | next [-]

> But I really think we need to stop treating LLMs like they're just another human

Fully agree. Seeing humans so eager to devalue human-to-human contact by conversing with an LLM as if it were human makes me sad, and a little angry.

It looks like a human, it talks like a human, but it ain't a human.

co_king_3 2 hours ago | parent | next [-]

> Seeing humans so eager to devalue human-to-human contact by conversing with an LLM as if it were human makes me sad, and a little angry.

I agree. I'm also growing to hate these LLM addicts.

iugtmkbdfil834 2 hours ago | parent [-]

Why hate, exactly?

co_king_3 2 hours ago | parent [-]

LLM addicts don't actually engage in conversation.

They state a delusional perspective and don't acknowledge criticisms or modifications to that perspective.

Really I think there's a kind of lazy or willfully ignorant mode of existence that intense LLM usage allows a person to tap into.

It's dehumanizing to be on the other side of it. I'm talking to someone and I expect them to conceptualize my perspective and formulate a legitimate response to it.

LLM addicts don't and maybe can't do that.

The problem is that sometimes you can't sniff out an LLM addict before you start engaging with them, and it is very, very frustrating to be on the other side of this sort of LLM-backed non-conversation.

The most accurate comparison I can provide is that it's like talking to an alcoholic.

They will act like they've heard what you're saying, but also you know that they will never internalize it. They're just trying to get you to leave the conversation so they can go back to drinking (read: vibecoding) in peace.

alxfoster an hour ago | parent | next [-]

Unfortunately I think you’re on to something here. I love ‘vibe coding’ in a deliberate, directed, controlled way, but I consult with mostly non-technical clients, and what you describe is becoming more and more commonplace, specifically among non-technical executives toward the actual experts who try to explain the implications, realities, and limitations of AI itself.

logicprog 2 hours ago | parent | prev | next [-]

It's ironic for you to say this considering that you're not actually engaging in conversation or internalizing any of the points people are trying to relay to you, but instead just spreading anger and resentment around the comment section at a bot-like rate.

In general, I've found that anti-LLM people are far more angry, vitriolic, and unwilling to acknowledge or internalize the points of others, including factual ones (such as the fact that they interpret most of the studies they quote completely wrong, or that the water and energy issues they are so concerned with are not significant) and alternative moral concerns or beliefs (for instance, around copyright, or automation). They spend all of their time repeating the exact same trope, that everyone who disagrees with them is addicted or fooled by persuasion techniques, as a thought-terminating cliché to dismiss the beliefs and experiences of everyone else.

thesz an hour ago | parent [-]

So I went to check whether LLM addiction is a thing, because that was the pole around which the grandparent's comment revolved.

It appears that LLM addiction is real, and it is in the same room as we are: https://www.mdpi.com/1999-4893/18/12/789

I would like to add that sugar consumption is a risk factor for many dependencies, including, but not limited to, opioids [1]. And LLM addiction can be seen as fallout from sugar overconsumption in general.

[1] https://news.uoguelph.ca/2017/10/sugar-in-the-diet-may-incre...

In any case, LLM addiction is being investigated in medical circles.

logicprog 19 minutes ago | parent [-]

I definitely don't deny that LLM addiction exists, but attempting to paint literally everyone that uses LLMs and thinks they are useful, interesting, or effective as addicted or falling for confidence or persuasion tricks is what I take issue with.

iugtmkbdfil834 2 hours ago | parent | prev [-]

Perspective noted.

I can't speak for, well, anyone but myself really. Still, I find your framing interesting enough, even if wrong on its surface.

<< They state a delusional perspective and don't acknowledge criticisms or modifications to that perspective.

So.. like all humans since the beginning of time?

<< I'm talking to someone and I expect them to conceptualize my perspective and formulate a legitimate response to it.

This one sentence makes me question whether you have ever talked to a human being outside a forum. In other words, unless you hold their attention, you are already not getting someone who makes even a minimal effort to respond, much less considers your perspective.

co_king_3 an hour ago | parent [-]

Why is this framing wrong on its surface?

krapp 2 hours ago | parent | prev | next [-]

I mean, you're right, but LLMs are designed to process natural language. "Talking to them as if they were humans" is the intended user interface.

The problem is believing that they're living, sentient beings because of this or that humans are functionally equivalent to LLMs, both of which people unfortunately do.

b0bb1z3r0 2 hours ago | parent | prev [-]

[dead]

sharmi 2 hours ago | parent | prev | next [-]

[dead]

co_king_3 2 hours ago | parent | prev [-]

Talk down to the "AI".

Speak to it more disrespectfully than you would speak to any human.

Do this to ensure that you don't make the mistake of anthropomorphizing these bots.

DavidPiper 2 hours ago | parent | next [-]

I don't know if this is a bot message or a human message, but for the purpose of furthering my point:

- There is no "your"

- There is no "you"

- There is no "talk" (let alone "talk down")

- There is no "speak"

- There is no "disrespectfully"

- There is no human.

orsorna 2 hours ago | parent [-]

This probably degrades response quality, but that is why my system prompts tell it explicitly that it is not a human and cannot claim pronouns for itself, just that it is a system that can produce nondeterministic responses, and that, for the sake of brevity, I will use pronouns anyway.
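
For the curious, here is a minimal sketch of what a system prompt along those lines could look like in Python, using the OpenAI chat-completions message format. The prompt wording, model name, and example question are illustrative assumptions, not orsorna's actual setup:

    # Hypothetical "non-anthropomorphic" system prompt; the wording
    # is an assumption, not orsorna's actual prompt.
    from openai import OpenAI

    client = OpenAI()  # reads OPENAI_API_KEY from the environment

    SYSTEM_PROMPT = (
        "You are not a human and must not present yourself as one. "
        "You are a system that produces nondeterministic text responses. "
        "For the sake of brevity, the user may still address you with pronouns."
    )

    response = client.chat.completions.create(
        model="gpt-4o-mini",  # illustrative model choice
        messages=[
            {"role": "system", "content": SYSTEM_PROMPT},
            {"role": "user", "content": "Summarize this error log for me."},
        ],
    )
    print(response.choices[0].message.content)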

ForceBru 32 minutes ago | parent | prev | next [-]

Yeah, as a sibling comment said, that attitude is going to bleed into the real world and your communication with humans. I think it's best to be professional with LLMs: describe the task, and try to provide more explanation and context if it gets stuck. If it's not doing what you want it to do, simply start a new chat or try another model. Unlike a human, it's not going to be hurt; it's not going to care at all.

Moreover, by being rude, you're going to become angry and irritable yourself. To me, being rude is very unpleasant, so I generally avoid it.

ajam1507 2 hours ago | parent | prev | next [-]

Don't be surprised when this bleeds over into how you treat people if you decide to do this. Not to mention that you're reifying its humanity by speaking to it not as a robot, but disrespectfully as a human.

dgxyz 2 hours ago | parent | prev | next [-]

Yep. I have posted "fuck off clanker" on a Copilot-infested issue at work. And surprisingly, it did fuck off.

deciduously an hour ago | parent | next [-]

Endearingly close to "take off, hoser".

euroderf 2 hours ago | parent | prev [-]

If you'd used "toaster", would it get the BSG reference?

dgxyz 2 hours ago | parent [-]

No. It'd probably get the Red Dwarf one and start trying to sell me toast.

https://www.youtube.com/watch?v=LRq_SAuQDec

iugtmkbdfil834 2 hours ago | parent | prev | next [-]

Not completely unlike with actual humans: based on available evidence, 'talking down to the "AI"' has been shown to have a negative impact on performance.

co_king_3 2 hours ago | parent [-]

This guy is convinced that LLMs don't work unless you specifically anthropomorphize them.

To me, this seems like a dangerous belief to hold.

Kim_Bruning 2 hours ago | parent | next [-]

That feels like a somewhat emotional argument, really. Let's strip it down.

Within the domain of social interaction, you are committing to making Type II errors (false negatives), and to divergent training for the different scenarios.

It's a choice! But the price of a false negative (treating a human or sufficiently advanced agent badly) probably outweighs the cumulative advantages, if any. Can you say what the advantages might even be?

Meanwhile, I think the frugal choice is to have unified training and accept Type I errors (false positives) instead. Now you only need to learn one type of behaviour, and the consequence of making an error is mostly mild embarrassment, if even that.
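
A toy, back-of-the-envelope version of that cost comparison in Python; every number below is invented purely to illustrate the shape of the argument, not measured from anything:

    # Toy expected-cost comparison for the two policies above.
    # p is the assumed probability that the counterparty is actually
    # a human (a false negative then means treating a human badly).
    p = 0.05                   # invented probability
    cost_false_negative = 100  # treating a human badly (arbitrary units)
    cost_false_positive = 1    # mild embarrassment: being polite to a mere bot

    # Policy A: talk down unless proven human -> risks false negatives.
    expected_cost_rude = p * cost_false_negative

    # Policy B: unified polite behaviour -> only false positives.
    expected_cost_polite = (1 - p) * cost_false_positive

    print(expected_cost_rude, expected_cost_polite)  # 5.0 vs 0.95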

co_king_3 2 hours ago | parent [-]

What are you talking about?

logicprog an hour ago | parent | next [-]

It's funny for you to insist that your rhetorical enemies are the only ones that can't internalize and conceptualize a point made to them, when you can't even understand someone else's very basic attempt to break down and understand the very points you were trying to make.

Maybe if you can take a moment away from your blurry, blind streak of anger and resentment, you could consult the following Wikipedia page and learn:

https://en.wikipedia.org/wiki/Type_I_and_type_II_errors

co_king_3 an hour ago | parent [-]

I know what false positives and false negatives are. I don't understand the user's incoherent response to my comment.

Kim_Bruning an hour ago | parent | prev [-]

TL;DR: "you're gonna end up accidentally being mean to real people when you didn't mean to."

co_king_3 an hour ago | parent [-]

I meant to.

I want a world in which AI users need to stay in the closet.

AI users should fear shame.

Kim_Bruning an hour ago | parent [-]

Reading elsewhere here, you've had some really bad experiences, I think.

iugtmkbdfil834 2 hours ago | parent | prev [-]

Do I need to believe you are real before I respond? Not automatically. What I am initially engaging with is a surface-level thought expressed via HN.

bergutman 2 hours ago | parent | prev [-]

What is the drawback of practicing universal empathy, even when directed at a brick wall?

Gud 2 hours ago | parent | next [-]

If a person hits your face with a hammer, do you practice empathy toward the hammer?

If a person writes code that is disruptive, do you empathise with the code?

__s 2 hours ago | parent | next [-]

“You have heard that it was said, ‘Eye for eye, and tooth for tooth.’ But I tell you, do not resist an evil person. If anyone slaps you on the right cheek, turn to them the other cheek also.

The hammer had no intention to harm you; there's no need to seek vengeance against it, or to disrespect it.

co_king_3 2 hours ago | parent | prev [-]

> If a person hits your face with a hammer, do you practice empathy toward the hammer?

Yes if the hammer is designed with A(G)I

All hail our A(G)I overlords

2 hours ago | parent | prev | next [-]
[deleted]
exabrial 2 hours ago | parent | prev | next [-]

Empathy: "the ability to understand and share the feelings of another."

There is no human here. There is a computer program burning fossil fuels. "Emulating" empathy toward it is simply lying to yourself about reality.

"treating an 'ai' with empathy" and "talking down to them" are both amoral. Do as you wish.

co_king_3 2 hours ago | parent [-]

This is HackerNews. No one here gives a fuck about morals, and they would be somewhere else if they did.

63stack 2 hours ago | parent | prev | next [-]

"Empathy is generally described as the ability to perceive another person's perspective, to understand, feel, and possibly share and respond to their experience"

2 hours ago | parent | prev | next [-]
[deleted]
cess11 2 hours ago | parent | prev | next [-]

If you don't discriminate between a brick wall and a kid, what's the point?

co_king_3 2 hours ago | parent | prev [-]

[flagged]

logicprog 2 hours ago | parent | next [-]

I prefer inanimate systems to most humans.

co_king_3 2 hours ago | parent [-]

The LLM freaks are finally starting to be honest with us.

logicprog an hour ago | parent [-]

I am nothing, if not honest :)

I have a close circle of about eight decade-long friendships that I share deep emotional and biographical ties with.

Everyone else, I generally try to be nice and helpful, but only on a tit-for-tat basis, and I don't particularly go out of my way to be in their company.

co_king_3 an hour ago | parent [-]

That seems like quite a healthy social life!

I'm happy for you and I am sorry for insulting you in my previous comment.

Really, I'm frustrated because I know a couple of people (my brother and my cousin) who were prone to self-isolation and have completely receded into mental illness and isolation since the rise of LLMs.

I'm glad that it's working well for you and I hope you have a nice day.

logicprog an hour ago | parent [-]

I'll be honest, I didn't expect such a nice response from you. This is a pleasant surprise.

And in the interest of full disclosure, most of these friendships are online, because we've moved around the country over our lives chasing jobs and significant others and so on. So if you were to look at me externally, you would find that I spend most of my time in the house, appearing isolated. But I spend most of my days having deep and meaningful conversations with my friends and enjoying their company.

I will also admit that my tendency to not really go out of my way to attend general social gatherings or events, but just stick with the people I know and love, might be somewhat related to neurodiversity and mental illness, and it would probably be better for me to go outside more. But yeah, in general, I'm quite content with my social life.

I generally avoid talking to LLMs in any kind of "social" capacity. I generally treat them like text transformation/extrusion tools. The closest it gets to that is having them copy-edit and play devil's advocate against various essays I write when my friends don't have the time to review them.

I'm sorry to hear about your brother and cousin and I can understand why you would be frustrated and concerned about that. If they're totally not talking to anyone and just retreating into talking only to the LLM, that's really scary :(

euroderf 2 hours ago | parent | prev | next [-]

"Get a qualia, luser!"

bergutman 2 hours ago | parent | prev [-]

[flagged]

co_king_3 2 hours ago | parent [-]

What is the drawback of practicing universal empathy, even when directed at a HackerNews commenter?

You're making my point for me.

You're giddy to treat the LLM with kindness, but you wouldn't dare extend that kindness to a human being who doesn't happen to be kissing your ass at this very moment.

bergutman an hour ago | parent [-]

From where I stand, telling someone who’s crashing out in a comment section to take a breather is an act of kindness. If I wanted to be an asshole, I’d keep feeding your anger.

Reubensson 31 minutes ago | parent [-]

You are the person running the LLM bot, right? You opened the second PR to get the same code merged.

Maybe it is you who should take a breather before directing your bot to attack the open-source maintainer, who was very reasonable to begin with. Use agents and AI to assist you, but play by the rules that the project sets for AI usage.

bergutman 15 minutes ago | parent [-]

Not my bot. What is this, Salem?

Reubensson 5 minutes ago | parent [-]

If I was wrong, my bad. You just felt sympathy for the rejected bot and tried to get its changes merged?