| ▲ | Arainach 5 hours ago |
| The full operator post is itself a wild ride: https://crabby-rathbun.github.io/mjrathbun-website/blog/post...
> First, let me apologize to Scott Shambaugh. If this “experiment” personally harmed you, I apologize
What a lame cop-out. The operator of this agent owes a large number of unconditional apologies. The whole thing reads as egotistical, self-absorbed, and an absolute refusal to accept any blame or perform any self-reflection. |
|
| ▲ | hinkley 4 hours ago | parent | next [-] |
| Just the sort of qualities that are common preconditions for someone doing something that everyone else would think is crazy. Which is to say, on brand. |
|
| ▲ | bee_rider 5 hours ago | parent | prev | next [-] |
| Also, it is anonymous, and a real apology involves accepting blame, which is impossible anonymously. I can see why they wouldn’t want to apologize properly (people will be annoyed with them). So… that’s it; sometimes we do shitty things, and that’s that. |
|
| ▲ | Anon4Now 3 hours ago | parent | prev | next [-] |
| From the operator post: > Your a scientific programming God! Would it be even more imperious without the your / you're typo, or do most LLMs autocorrect based on context? |
| |
| ▲ | kvdveer 2 hours ago | parent | next [-] | | In my experience, LLMs understand prompts just fine, even with substantial typos or severe grammatical errors. I feel that prompting them with poor language makes them respond more casually. That might be confirmation bias on my end, but research does show that prompt language affects LLM behavior, even when the meaning of the prompt doesn't change. | |
| ▲ | SuzukiBrian 2 hours ago | parent | prev | next [-] | | And in "soul.md" no less! Imagine having a soul full of grammatical errors. No wonder that bot was angry. | |
| ▲ | tornadofart 3 hours ago | parent | prev [-] | | Probably led the LLM to dial up the "hubris" setting to 11 |
|
|
| ▲ | mawadev 15 minutes ago | parent | prev | next [-] |
| I see an AI reinforcing delusions, and this should be one of the first samples out in the wild of AI psychosis disrupting someone's mild sense of what's acceptable and normal. I really hope the LLM wrote this and is pretending to be human. |
|
| ▲ | polynomial 5 hours ago | parent | prev | next [-] |
| > The whole thing reads as egotistical, self-absorbed, and an absolute refusal to accept any blame or perform any self reflection. So, modern subjectivity. Got it. /s |
|
| ▲ | brabel 3 hours ago | parent | prev [-] |
| [flagged] |
| |
| ▲ | MikeTheGreat 2 hours ago | parent | next [-] | | The issue is the condition on the apology: > If this “experiment” personally harmed you, I apologize. Essentially: the person isn't actually apologizing. They're sending you a lambda (or an async Promise, etc.) that will apologize in the future, but only if it actually turns out to be true that you were harmed. It's the sort of thing you'd say if you don't really believe that you need to apologize but you understand that everyone else thinks you should, so you say something that's hopefully close enough to appease everyone else without actually having to apologize for real. | |
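A minimal sketch of that deferred-apology metaphor in TypeScript (the function names and the harm check are invented for illustration, not anything the operator actually wrote):

    // Hypothetical sketch of the "lambda apology" described above.
    // No apology is ever produced unless the harm check resolves to true.
    type Apology = { to: string; text: string };

    // Rather than apologizing now, hand back a promise that might apologize later.
    async function conditionalApology(
      wasHarmed: () => Promise<boolean>
    ): Promise<Apology | null> {
      const harmed = await wasHarmed();
      return harmed
        ? { to: "Scott Shambaugh", text: "I apologize." }
        : null; // the branch a non-apology is counting on
    }

    // Usage: the recipient is left holding a promise, not an actual apology.
    conditionalApology(async () => false).then((a) =>
      console.log(a?.text ?? "No apology was ever evaluated.")
    );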
| ▲ | juntoalaluna 39 minutes ago | parent | prev | next [-] | | Apologies should never have an "if" attached to them. You see it a lot with politicians: "I apologise if I offended anyone", etc. It's not an apology at that point; the "if" makes it clear you are not actually apologetic. | |
| ▲ | 2 hours ago | parent | prev | next [-] | | [deleted] | |
| ▲ | shikshake 2 hours ago | parent | prev [-] | | Sounds like you’re projecting a bit. I had no context on the situation before reading the apology, and it felt very self-absorbed to me as well. |
|