▲ | luisfmh 6 days ago |
So people that look to chatgpt for answers and help (as they've been programmed to do by all the marketing and capability claims from openai) should just die because they looked to chatgpt for an answer instead of google or their local suicide helpline? That doesn't seem reasonable, but it sounds to me like what you're saying.

> So did the user. If he didn't want to talk to a chatbot he could have stopped at any time.

This sounds similar to telling depressed people to just stop being sad. IMO if a company is going to claim and release some pretty disruptive and unexplored capabilities through their product, they should at least have to make it safe. You put up a safety railing because people could trip or slip. I don't think a mistake that small should end in death.
▲ | sooheon 6 days ago | parent | next [-]
Let's flip the hypothetical -- if someone googles for suicide info, scrolls past the hotline info, and ends up killing themselves anyway, should google be on the hook?
▲ | charcircuit 6 days ago | parent | prev [-]
Firstly, people don't "just die" by talking to a chatbot. Secondly, if someone wants to die, then I am saying it is reasonable for them to die.