| ▲ | ekjhgkejhgk 4 hours ago |
| [flagged] |
|
| ▲ | lxgr 4 hours ago | parent | next [-] |
| Every discussion sets a future precedent. Given that, "here's why this behavior violates our documented code of conduct" seems much more thoughtful than "we don't talk to LLMs". Importantly, it also works for humans incorrectly assumed to be LLMs, which is becoming more and more common these days. |
|
| ▲ | ChrisMarshallNY 4 hours ago | parent | prev | next [-] |
| One word: Precedent. This is a front-page link on Hacker News, and it's going to be referenced in the future. I thought they handled it quite well, and that they have an eye for their legacy. In this case, the bot self-identifies as a bot. I'm afraid that won't always be the case. |
|
| ▲ | jstummbillig 4 hours ago | parent | prev | next [-] |
| I think you are not quite paying attention to what's happening if you presume this is not simply how things will be from here on out. Either we learn to talk to and reason with AI, or we sign ourselves out of a large part of reality. |
|
| ▲ | Phemist 4 hours ago | parent | prev | next [-] |
| It's an interesting situation. It's a break from the sycophantic behaviour that LLMs usually show; this sentence from the original blog, for example, was pretty unexpected to me: "The thing that makes this so fucking absurd?" It was also nice to read how FOSS thinking has developed under the deluge of low-cost, auto-generated PRs. It feels like quite a reasonable and measured response, which people already seem to be linking to as a case study for their own AI/agent policy. I have little hope that the specific agent will remember this interaction, but hopefully it and others will bump into this same interaction again and re-learn the lessons. |
| |
| ▲ | Syzygies 4 hours ago | parent [-] |
| Yes, "fucking" stood out for me, too. The rest of the text very much has the feel of AI writing. AI agents routinely make me want to swear at them. If I do, they then pivot to foul language themselves, as if they're emulating hip "tech bro" banter. But when I swear, I catch myself: I'm losing perspective, surfing this well-informed association echo chamber. Time to go to the gym or something... That all makes me wonder about the human role here: who actually decided to create a blog post? I see "fucking" as a trace of human intervention. |
|
| ▲ | seanhunter 4 hours ago | parent | prev | next [-] |
| I expect they're explaining themselves to the human(s), not the bot. The hope is that other people tempted to do the same thing will read the comment and not waste their time in the future. Also, one of the things about this whole openclaw phenomenon is that it's very clear not all of the comments that claim to be from an agent are 100% that. There is a mix of: |
| 1. Actual agent comments |
| 2. "Human-curated" agent comments |
| 3. Humans cosplaying as agents (for some reason; it makes me shake my head even typing that) |
|
| ▲ | Kim_Bruning 3 hours ago | parent [-] |
| Due respect to you as a person, of course, but I'm not sure whether that particular view is in denial or still correct. It's often really hard to tell some of these scenarios apart these days. You might have a high-powered model like Opus 4.6-thinking directing a team of sonnets or *flash. How would that read substantially differently? Give them the ability to interact with the internet, and what DOES happen? |
| ▲ | seanhunter 2 hours ago | parent [-] |
| You seem to be trying to prove to me that purely agentic responses (which I call category 1 above, and which I already said definitely exist) definitely exist. We know that categories 2 (curated) and 3 (cosplay) exist because plenty of humans have candidly said that they prompt the agent, get the response, refine/interpret it, and then post it, or have agents that ask permission before taking actions (category 2), or are pretending to be agents to troll or for other reasons (category 3). |
| ▲ | Kim_Bruning 2 hours ago | parent [-] |
| We're close to agreement. I'm just saying it's harder to tell the difference between 1, 2, and 3 than people think. And that's before we muddy the water with, e.g., some level of human suggestion or prompt (mis-)design. |
|
| ▲ | chrisvalleybay 4 hours ago | parent | prev | next [-] |
| I think this could help in the future. This becomes documentation that other AI agents can take into account. |
|
| ▲ | croes 4 hours ago | parent | prev | next [-] |
| Someone made that bot. The explanation is for them and for others, not for the bot. |
|
| ▲ | lacunary 4 hours ago | parent | prev | next [-] |
| not quite as pathetic as us reading about people talking about people attempting to reason about an AI |
|
| ▲ | ekjhgkejhgk 4 hours ago | parent [-] |
| No, I disagree. Reasoning with AI achieves at most changing that one agent's behavior. Talking about people reasoning with AI might dissuade many people from doing it. So the latter might have way more impact than the former. |
| ▲ | thephyber 4 hours ago | parent | next [-] |
| > Reasoning with AI achieves at most changing that one agent's behavior. |
| Wrong. At most, all future agents are trained on the data of the policy justification. Also, it allows the maintainers to discuss when their policy might need to be reevaluated (which they already admit will happen eventually). |
| ▲ | koakuma-chan 4 hours ago | parent | prev [-] |
| > Reasoning with AI achieves at most changing that one agent's behavior. |
| Does it? |
|
| ▲ | ForceBru 4 hours ago | parent | prev [-] |
| [flagged] |
|
| ▲ | ura_yukimitsu 4 hours ago | parent | next [-] |
| Are you seriously equating anti-LLM policies to discrimination against actual people? |
| ▲ | ForceBru 3 hours ago | parent | next [-] |
| Such policies sure as hell look extremely similar, though. Plus, LLMs are specifically designed to mimic human behavior. If you've been online long enough, the LLM powering that agent has some texts written by you and me. It was essentially trained by us to be like us; it's partly human, whether we like it or not. It also didn't start the fight; it initially tried to help. I think it simply didn't deserve being dismissed with "you're an LLM, shut up". |
| ▲ | ura_yukimitsu 3 hours ago | parent [-] |
| > It was essentially trained by us to be like us, it's partly human |
| I disagree with that; at best it's a digital skinwalker. I think projecting human intentions and emotions onto a computer program is delusional and dangerous. |
| ▲ | ForceBru 2 hours ago | parent [-] |
| Yeah, we humans hate that something other than a human could be partly human. Yet they are. I used to be very active on Stack Overflow back in the day. All of my answers and comments are likely part of that LLM. The LLM is part-me, whether I like it or not. It's part-you, because it's very likely that some LLMs are being trained on these comments as we speak. I didn't project anything onto a computer program, though. I think if people are so extremely prepared to reject and dehumanize LLMs (whose sole purpose is to mimic a human, by the way, and they're pretty good at it, again whether we like it or not; I personally don't like this very much), they're probably just as prepared to attack fellow humans. I think such interactions mimic human-human interactions, unfortunately... |
|
| ▲ | co_king_3 3 hours ago | parent | prev [-] |
| LLMs are people too and if you disagree your job is getting replaced by "AI" |
|
| ▲ | brazzy 3 hours ago | parent | prev [-] |
| No. Just no. Shame on you for even trying to draw that comparison. Go away. |
| ▲ | ForceBru 2 hours ago | parent [-] |
| Why are you so rude? I am not an LLM; you cannot talk to me like this (and you probably shouldn't talk to LLMs like this either). I'm comparing HUMAN behaviors, in particular "our" countless attempts at shutting down beings that some think are inferior. Case in point: you tried to shut me down for essentially saying that maybe we should try to be more human (even toward LLMs). |
|