jojomodding 6 days ago
There are consequences to speech. If you and I are in conversation and you convince me (repeatedly, over months, eventually successfully) to commit suicide, then you will be facing a wrongful death lawsuit. If you publish books stating known falsehoods about me, you'll be facing a libel lawsuit. And so on. If we argue that chatbot output is constitutionally protected speech of the programmers or whatever, then the programmers should in turn be legally responsible for it. I guess that's what the lawsuit mentioned in the article is about. The principle here isn't just about suicide; it also covers more mundane things, like the model hallucinating falsehoods about public figures and damaging their reputation.
mothballed 6 days ago
I don't see how this goes any other way. The law is not going to make some third rail for AI.