ipython | 6 days ago
It takes intent and effort to publish or speak. That's not present here. None of the authors whose work has "contributed" to the training data of any AI bot consented to that use. In addition, the exact mechanism at work here, model alignment, is something model providers specifically train for. The raw pre-training data is only the first step and doesn't on its own produce a usable model. So in effect, the "choice" of how to respond to queries about suicide is as much a product of OpenAI's decisions as of the original training data.