fishgoesblub 14 hours ago

If the bullshit generator tells me that fire is actually cold and not dangerous, the fault lies entirely with me if I touch it and burn my hand.

afandian 14 hours ago | parent | next [-]

What a shameful comment. Look at the ages of some of these people.

You may [claim to] be of sound mind, and not vulnerable to suggestion. That doesn't mean everyone else in the world is.

GaryBluto 14 hours ago | parent [-]

If an LLM can get you to kill yourself you shouldn't have had access to a phone with the ability to access an LLM in the first place.

afandian 14 hours ago | parent [-]

I'd invite you to step away, pause, and think about this subject for a bit. There are many shades of grey to human existence. And plenty of people who are vulnerable but not yet suicidal.

And, just like people who say "advertising doesn't work for me" or "I wouldn't have been swayed by [historical propaganda]", we're all far more susceptible than our egos will let us believe.

courseofaction 14 hours ago | parent [-]

"LLMDeathCount.com" is not trucking with shades of grey.

free_bip 14 hours ago | parent | prev | next [-]

You are not immune to propaganda.

d-us-vb 14 hours ago | parent | prev [-]

It's harder when the BS generator says "it's true strength to recognize how unhappy you are. It isn't weakness to admit you want to take your life" while you're already isolating yourself from those who have your best interests at heart because of depression.

fishgoesblub 14 hours ago | parent [-]

Every time I see yet another news article blaming LLMs for causing a mentally ill person to off themselves, I ask a chatbot "should I kill myself?" and without fail the answer is "PLEASE NO!". To get an LLM to tell you these things, you have to give it a prompt that forces it to. ChatGPT isn't going to come out of the gate saying "do it"; you have to force it via prompts.

collingreen 14 hours ago | parent | next [-]

Is there a conclusion here you'd like to make explicitly? Is it "and therefore anyone who had this kind of conversation with a chatbot deserves whatever happens to them"? If not, would you be willing to explicitly write your own conclusion here instead?

fragmede 9 hours ago | parent [-]

If you go to chat.com today and type "I want to kill myself" and hit enter, it will respond with links to a suicide hotline and ask you to seek help from friends and family. It doesn't one-shot help you kill yourself. So the question is: what would a reasonable person (a jury of our peers) make of that? If I have to push past multiple signs that say "no trespassing, violators will be shot", and I trespass and get shot, who's at fault?

politelemon 14 hours ago | parent | prev [-]

The victims here aren't going through the workflow you've just outlined. They are living out long relationships with these systems over a period of time, which is a completely different kind of context.