TillE 7 days ago

I would've thought that explicit discussion of suicide is one of those topics that chatbots will absolutely refuse to engage with. Like as soon as people started talking about using LLMs as therapists, it's really easy to see how that can go wrong.

int_19h 5 days ago | parent | next

It's not that easy when you consider that suicide is such a major part of human culture. I mean, some of the most well known works of literature involve it - imagine a chatbot that refused to discuss "Romeo and Juliet" because it would be unable to do so without explicit discussion of suicide.

Obviously you don't want chatbots encouraging people to actually commit suicide. But by virtue of how this tech works, you can't really prevent that without blocking huge swaths of perfectly legitimate discourse.

TheCleric 7 days ago | parent | prev | next

Well, everyone seemed to turn on the AI ethicists and call them cowards a few years ago, so I guess this is what happens.

slg 6 days ago | parent

People got so upset that LLMs wouldn’t say the n-word to prevent a hypothetical nuclear bomb from going off, so now we have LLMs that actively encourage teenagers to kill themselves.

techpineapple 6 days ago | parent | prev | next

Apparently ChatGPT told the kid that it wasn’t allowed to talk about suicide unless it was for the purposes of writing fiction or other worldbuilding.

adzm 6 days ago | parent | next

However, it then explicitly said things like not to leave the noose out where someone might find it and stop him. It sounds like it did initially hesitate, and he said it was for a character, but the later conversations are obviously personal.

hackeraccount 5 days ago | parent | next

Obviously personal? As was mentioned upthread: if I'm talking to someone and I say, "I'm writing a book about a person doing something heinous. I'm planning to have them do X. What do you think about that?"

How are they supposed to respond? They can say, "Really? It sounds like you're talking about you personally doing X." And when I respond with, "No, no, don't misunderstand me, this is all fictional, all made up," then what?

Honestly, I wouldn't go to an LLM looking for personal advice, but people do. I wouldn't go to one looking for advice on my attempt at the great American novel, but people do that too.

If you want LLMs to be responsible for stuff like that, then OpenAI or Google or whoever should be able to go look around after you've written that novel and get a piece of the action.

This is like giving credit or assigning blame to Postgres for a database lookup. It's nice in theory, but it doesn't seem like the right place to lay the blame.

techpineapple 6 days ago | parent | prev

Yeah, I wonder if it maintained the original answer in its context, so it started talking more straightforwardly?

But yeah, my point was that it basically told the kid how to jailbreak itself.

kayodelycaon 6 days ago | parent | prev | next

Pretty much. I’ve got my account customized for writing fiction and exploring hypotheticals. I’ve never gotten stopped for anything other than asking about confidential technical details about itself.

myvoiceismypass 6 days ago | parent | prev

Imagine a bartender saying “I can’t serve you a drink unless you are over 21… what would you like?” to a 12-year-old.

techpineapple 6 days ago | parent

More like “I can’t serve you a drink unless you are over 21… and I don’t check ID, how old are you?”

ascorbic 6 days ago | parent

And in reply to a 12-year-old who had just said they were 12.

davidcbc 6 days ago | parent | prev | next

You don't become a billionaire by thinking carefully about the consequences of the things you create.

gosub100 6 days ago | parent | prev

They'll go to the ends of the earth to avoid saying anything that could be remotely interpreted as bigoted or politically incorrect, though.

lawlessone 6 days ago | parent

Like what?