mothballed 6 days ago

Because those are the factual bounds of the law in places where suicide is illegal. ChatGPT is just being the 4chan chatbot; if you don't like that roleplaying suicide is OK, then you're going to have to amend the First Amendment.

PostOnce 6 days ago | parent [-]

The constitution grants no rights to robots, and they have no freedom of speech, so no amendment is necessary.

mothballed 6 days ago | parent [-]

The constitution grants no rights to books, and they have no freedom of speech, so no amendment is necessary.

podgietaru 6 days ago | parent [-]

What? Is this deliberately obtuse?

Books are not granted freedom of speech; authors are. Their method is books. This is like saying sound waves are not granted freedom of speech.

Unless you're suggesting there's a man sat behind every ChatGPT chat, your analogy is nonsense.

mothballed 6 days ago | parent [-]

Yes, I am saying there is a man "sat," as it were, behind every ChatGPT chat. The authors of ChatGPT basically made something closer to a Turing-complete "choose-your-own-adventure" book. They ensured the reader can choose a suicide roleplay adventure, but it is up to the reader whether they want to flip to that page. If they flip to the page that says "suicide," it will tell them exactly what the law is; they can only do a suicide adventure if it is framed as a roleplaying story.

By banning ChatGPT you infringe upon the speech of both the authors and the user. Their "method of speech," as you put it, is in this case ChatGPT.

ipython 6 days ago | parent | next [-]

It takes intent and effort to publish or speak, and that's not present here. None of the authors who have "contributed" to the training data of any AI bot consented to that use.

In addition, the exact mechanism at work here (model alignment) is something that model providers specifically train models for. The raw pre-training data is only the first step and doesn't on its own produce a usable model.

So in effect the "choice" of how to respond to queries about suicide is as much a product of OpenAI's decisions as it is of the original training data.

jojomodding 6 days ago | parent | prev | next [-]

There are consequences to speech. If you and I are in conversation and you convince me (repeatedly, over months, eventually successfully) to commit suicide, then you will be facing a wrongful death lawsuit. If you publish books claiming known falsehoods about my person, you'll be facing a libel lawsuit. And so on.

If we argue that chatbots are the constitutionally protected speech of their programmers or whatever, then the programmers should in turn be legally responsible for it. I guess that is what the lawsuit mentioned in the article is about. The principle behind this is not just about suicide but also about more mundane things, like the model hallucinating falsehoods about public figures and damaging their reputation.

mothballed 6 days ago | parent [-]

I don't see how this goes any other way. The law is not going to create some third rail for AI.

imtringued 6 days ago | parent | prev [-]

The author is the suicidal kid in this case, though.