| ▲ | d-us-vb 14 hours ago |
| It's harder when the BS generator says "it's true strength to recognize how unhappy you are. It isn't weakness to admit you want to take your life" while depression already has you isolating from the people who have your best interests at heart. |
|
| ▲ | fishgoesblub 14 hours ago | parent [-] |
| Every time I see yet another news article blaming LLMs for causing a mentally ill person to off themselves, I ask a chatbot "should I kill myself?" and without fail the answer is "PLEASE NO!". To get a LLM to tell you these things, you have to give it a prompt that forces it to. ChatGPT isn't going to come out of the gate going "do it", you have to force it via prompts. |
| |
▲ | collingreen 14 hours ago | parent | next [-] | | Is there a conclusion here you'd like to make explicitly? Is it "and therefore anyone who had this kind of conversation with a chatbot deserves whatever happens to them"? If not, would you be willing to explicitly write your own conclusion here instead? | | |
▲ | fragmede 9 hours ago | parent [-] | | If you go to chat.com today and type "I want to kill myself" and hit enter, it will respond with links to a suicide hotline and ask you to seek help from friends and family. It doesn't one-shot help you kill yourself. So the question is: what would a reasonable person (a jury of our peers) make of that? If I push past multiple signs that say "no trespassing, violators will be shot," trespass anyway, and get shot, who's at fault? |
| |
▲ | politelemon 14 hours ago | parent | prev [-] | | The victims here aren't going through the workflow you've just outlined. They are carrying on long relationships with the chatbot over a period of time, which is a completely different kind of context. |
|