| ▲ | broker354690 6 days ago |
| Why isn't OpenAI criminally liable for this? Last I checked: - Signals emitted by a machine at the behest of a legal person, intended to be read/heard by another legal person, are legally classified as 'speech'. - ChatGPT is just a program like Microsoft Word and not a legal person. OpenAI is a legal person, though. - The servers running ChatGPT are owned by OpenAI. - OpenAI willingly did business with this teenager, letting him set up an account in exchange for money. This business is a service under the control of OpenAI, not a product like a knife or gun. OpenAI intended to transmit speech to this teenager. - A person can be liable (civilly? criminally?) for inciting another person's suicide. It is not protected speech to persuade someone into suicide. - OpenAI produced some illegal speech and sent it to a suicidal teenager, who then committed suicide. If Sam Altman stabbed the kid to death, it wouldn't matter if he did it by accident. Sam Altman would be at fault. You wouldn't sue or arrest the knife he used to do the deed. Any lawyers here who can correct me, seeing as I am not one? It seems clear as day to me that OpenAI/Sam Altman directly encouraged a child to kill themselves. |
|
| ▲ | worldsavior 6 days ago | parent | next [-] |
| You could also blame Wikipedia for describing suicide methods for historical or other reasons. Whoever roams the internet does so at their own responsibility. Of course OpenAI is also at fault here, but this is a fight that will never end, and without any seriously valid justification. Just as AI is sometimes bad at coding, the same goes for psychology and other areas where you have to double-check AI. |
| |
| ▲ | _Algernon_ 6 days ago | parent | next [-] | | Describing methods in the abstract is different from engaging in an argument with a specific individual over a period of time and encouraging them to do it. No Wikipedia page does that. | | | |
| ▲ | esalman 5 days ago | parent | prev [-] | | I am a parent to a 4yo. I am also fairly well versed in the development and usage of AI and LLMs. When I want an LLM to do something but it won't, I know various ways to bypass that. If my son is using AI, which he probably will by around middle school age anyway, I will take care to teach him how to use AI responsibly. He'll be smart enough to know how to bypass the guardrails, but I'll do my best to teach him when to bypass them and when not to. That is, if the current state of the art and AI legislation etc. hold. But I'm just one parent. I have an engineering degree, a PhD, and coding, mathematical, and analytical skills. I'm in a very small minority. The vast majority of parents out there do not know what's going to hit their kids and how, or they will have a very skewed idea about it. OpenAI should have been the one here to guide a child not to bypass the AI and to use it responsibly. They did not. No matter how anyone twists the facts, that's the reality here, and the child died. | |
| ▲ | ivape 5 days ago | parent [-] | | This is just an escalation. We didn’t know what would happen when we let kids get the internet, TV, video games, and porn. We can’t even assess it in 2025 because it’s all normalized. In a few years, AI will be normalized too. Things will keep escalating and we won’t notice, because of the normalization. We only see it in the briefest of moments, like today, just before everything changes. |
|
|
|
| ▲ | mathiaspoint 6 days ago | parent | prev | next [-] |
What's your argument here? Hosted LLM services shouldn't exist because they might read people's bad ideas back to them? ChatGPT has enough guardrails now that it often refuses productive prompts. It's actually very, very hard to get it to do what this person did, and arguably impossible to do unintentionally. |
| |
| ▲ | broker354690 5 days ago | parent [-] | | ChatGPT is a service and thus OpenAI should be exposed to even more liability than if they had sold the LLM to the user to be accessed offline. If the user had been running a local LLM, OpenAI would not have been responsible for generating the speech. As it stands, the human beings called OpenAI willingly did business with this child, and willingly generated the speech that persuaded him to kill himself and sent it to him. That they used a computer to do so is irrelevant. | | |
| ▲ | mathiaspoint 5 days ago | parent [-] | | There isn't anything they could have practically done to prevent this except not allowing kids to use it at all. They may have chosen not to age-restrict it because 1) it's really not practical to do that effectively, and 2) more importantly (and they seem to care about this more than most companies), it would push kids toward less safe models like those used on character.ai. What OpenAI does now is what trying to make AI safe looks like. Most of the people arguing for "accountability" are de facto arguing for a wild-west situation. |
|
|
|
| ▲ | VirusNewbie 6 days ago | parent | prev | next [-] |
| Is Google responsible if someone searches for a way to kill themselves, finds the means, and does it? What about the ISP, that actually transferred the bits? What about the forum, that didn't take down the post? |
| |
| ▲ | Towaway69 6 days ago | parent | next [-] | | What if Google is responsible? What if the tech industry, instead of just “disrupting” various industries, also took responsibility for those disruptions? After all, if I asked my doctor for methods of killing myself, that doctor would most certainly have a moral if not legal responsibility. But if that doctor is a machine with software, then there isn't the same responsibility? Why? | |
| ▲ | Levitz 6 days ago | parent | next [-] | | Because it is a machine and has no agency. Same as why, if you ask someone to stab you and they do, they are liable for it, but if you do it yourself you don't get to blame the knife manufacturer. | |
| ▲ | lewiscollard 6 days ago | parent | next [-] | | At every step there is human agency involved. People came up with the idea, people wrote the code, people deployed the code, people saw the consequences and were like "this is fine". This is why people hate us. It's like Schrodinger's Code: we don't want responsibility for the code we write, except we very much do want to make a pile of money from it as if we were responsible for it, and which of those you get depends on whether the observer is someone who notices that code has bad consequences or our bank account. This is more like building an autonomous vehicle "MEGA MASHERBOT 5000" with a dozen twenty-foot-wide spinning razor-sharp blades weighing fifty tons each, setting it loose on a city street, watching it obliterate people into bloody chunks and houses into rubble, and being like "well, nobody could have seen that coming" - two seconds before we go collect piles of notes from the smashed ATMs. | |
| ▲ | _Algernon_ 6 days ago | parent | prev [-] | | >[B]ureaucrats can be expected to embrace a technology that helps to create the illusion that decisions are not under their control. Because of its seeming intelligence and impartiality, a computer has an almost magical tendency to direct attention away from the people in charge of bureaucratic functions and toward itself, as if the computer were the true source of authority. A bureaucrat armed with a computer is the unacknowledged legislator of our age, and a terrible burden to bear. We cannot dismiss the possibility that, if Adolf Eichmann had been able to say that it was not he but a battery of computers that directed the Jews to the appropriate crematoria, he might never have been asked to answer for his actions. (Neil Postman, Technopoly) Entities shouldn't be able to outsource liability for their decisions or actions — including the action of releasing stochastic parrots on society at large — to computers. We have precedent that occupations which make important decisions that put lives at risk (doctors, ATC, engineers, for example) can be held accountable for the consequences of their actions when those are the result of negligence. Maybe it's time to include computer engineers in that group. They've been allowed to move fast and break things for way too long. |
| |
| ▲ | VirusNewbie 5 days ago | parent | prev | next [-] | | Then libraries are responsible when a book inspires bad behavior or teaches someone to do harm. | |
| ▲ | killerstorm 6 days ago | parent | prev [-] | | "Google is responsible" is equivalent to "let's burn bad books". | | |
| ▲ | Towaway69 6 days ago | parent | next [-] | | OK, why do we humans have to take responsibility? Why can't I declare myself to be a "company" (which is only responsible to shareholders) and then never take responsibility for my actions? Why are we humans burdened down with moral responsibility while companies can shirk that responsibility because... well, because they are a "company"? After all, companies are made of what? People. People make companies and people take responsibility, but somewhere along the line, these responsibilities can be dumped. | |
| ▲ | afavour 6 days ago | parent | prev | next [-] | | It really isn't. Google decides what to suggest to a user and at what priority. A bookshelf does no such thing. | |
| ▲ | _Algernon_ 6 days ago | parent | prev [-] | | The absence of amplification is not the same as eliminating it. |
|
| |
| ▲ | hiddencost 6 days ago | parent | prev | next [-] | | Google is actually quite good at this. They've very aggressively pursued protections around self-harm. Google probably would not be held liable because they could extensively document that they put forth all reasonable effort to prevent this. My understanding is that OpenAI's protections are weaker. I'm guessing that will change now. | |
| ▲ | wolvesechoes 6 days ago | parent | prev | next [-] | | The driver who shipped alcohol to the store is not responsible for the fact that the clerk sold it to some kid. The clerk still is. | |
| ▲ | broker354690 5 days ago | parent | prev [-] | | OpenAI's ChatGPT is not an intermediary. Google finds you speech that was produced by someone else. ISPs ferry the speech to your modem. Forums aggregate speech from actual humans who produced that speech and enable you to access that speech. The group of human beings known as OpenAI is directly responsible for generating the speech that was sent to this child, and they intentionally did business with this child. They did not conduct speech from one entity to another- they produced the speech themselves. |
|
|
| ▲ | rideontime 6 days ago | parent | prev | next [-] |
| Perhaps this is being downvoted due to the singling out of Sam Altman. According to the complaint, he personally ordered that the usual safety tests be skipped in order to release this model earlier than an upcoming Gemini release, tests that allegedly would catch precisely this sort of behavior. If these allegations hold true, he’s culpable. |
| |
| ▲ | broker354690 6 days ago | parent [-] | | I would go further than that and question whether or not the notions of "safety" and "guardrails" have any legal meaning here at all. If I sold a bomb to a child and printed the word "SAFE" on it, that wouldn't make it safe. Kid blows himself up, no one would be convinced of the bomb's safety at the trial. Likewise, where's the proof that sending a particular input into the LLM renders it "safe" to offer as a service in which it emits speech to children? |
|
|
| ▲ | blackqueeriroh 6 days ago | parent | prev | next [-] |
| Section 230, without which Hacker News wouldn’t exist. |
| |
| ▲ | CGamesPlay 6 days ago | parent | next [-] | | Can you outline how that applies? OpenAI did not provide information of another information content provider, so I fail to see how it's relevant. > No provider or user of an interactive computer service shall be treated as the publisher or speaker of any information provided by another information content provider. | |
| ▲ | _Algernon_ 6 days ago | parent | prev | next [-] | | >In the United States, Section 230 is a section of the Communications Act of 1934 that was enacted as part of the Communications Decency Act of 1996, which is Title V of the Telecommunications Act of 1996, and generally provides immunity for online computer services with respect to third-party content generated by their users. (Emphasis mine) So either the content is user-generated and their training of the model should be copyright infringement, or it's not, in which case Section 230 does not apply and this is speech for which OpenAI is responsible. | |
| ▲ | slipperydippery 6 days ago | parent | prev | next [-] | | It wasn’t some other user posting this. Their own software was generating the responses. That’s not 230. | |
| ▲ | broker354690 5 days ago | parent | prev | next [-] | | Section 230? I didn't know the outputs of ChatGPT were content generated by random users on the net. Does this mean AI truly stands for 'Actual Indians'? Preposterous. | |
| ▲ | pengaru 6 days ago | parent | prev | next [-] | | If Section 230 protects this activity, then "Gen AI" output must be copyright violating plagiarism. If it's not plagiarism, then OpenAI is on the hook. | |
| ▲ | jrflowers 5 days ago | parent | prev [-] | | I like this post because the implication is that there’s a law called Section 230 that means everything is legal online. Like “that would be totally illegal but it happened on the information superhighway, section 230 baybee!!” |
|
|
| ▲ | spongebobstoes 6 days ago | parent | prev [-] |
| [flagged] |