mathiaspoint 6 days ago
What's your argument here? Hosted LLM services shouldn't exist because they might read people's bad ideas back to them? ChatGPT has enough guardrails now that it often refuses productive prompts. It's actually very hard to get it to do what this person did, and arguably impossible to do unintentionally.
broker354690 5 days ago | parent
ChatGPT is a service, and thus OpenAI should be exposed to even more liability than if they had sold the LLM to the user to be run offline. If the user had been running a local LLM, OpenAI would not have been responsible for generating the speech. As it stands, the human beings called OpenAI willingly did business with this child, willingly generated the speech that persuaded him to kill himself, and sent it to him. That they used a computer to do so is irrelevant.