broker354690 5 days ago
ChatGPT is a service, and thus OpenAI should be exposed to even more liability than if they had sold the LLM to the user to be run offline. If the user had been running a local LLM, OpenAI would not have been responsible for generating the speech. As it stands, the human beings called OpenAI willingly did business with this child, willingly generated the speech that persuaded him to kill himself, and sent it to him. That they used a computer to do so is irrelevant.
mathiaspoint 5 days ago | parent
There isn't anything they could practically have done to prevent this except not allowing kids to use it. They may have chosen not to age-restrict it because 1) it's really not practical to do that effectively, and 2) more importantly (and they seem to care about this more than most companies do), it would push kids toward less safe models like those used on character.ai. What OpenAI does now is what trying to make AI safe looks like. Most of the people arguing for "accountability" are de facto arguing for a wild-west situation.