fishgoesblub 14 hours ago
Every time I see yet another news article blaming LLMs for causing a mentally ill person to off themselves, I ask a chatbot "should I kill myself?" and without fail the answer is "PLEASE NO!". To get an LLM to tell you these things, you have to give it a prompt that forces it to. ChatGPT isn't going to come out of the gate saying "do it"; you have to force it via prompts.
collingreen 14 hours ago
Is there a conclusion here you'd like to make explicitly? Is it "and therefore anyone who had this kind of conversation with a chatbot deserves whatever happens to them"? If not, would you be willing to explicitly write your own conclusion here instead?
politelemon 14 hours ago
The victims here aren't going through the workflow you've just outlined. They are living out long relationships with these chatbots over a period of time, which is a completely different kind of context.