pembrook · 6 days ago:
This is dumb. Nobody is writing articles about all the times the opposite happened and ChatGPT helped prevent bad stuff. But because of the nature of this topic, it's the perfect target for the NYT to generate moral panic for clicks. Classic media attention bait 101. I can't believe HN is falling for this.

It's the equivalent of the moral panic around metal music in the 1980s, when the media created hysteria around the false idea that there were hidden messages in the lyrics encouraging teens to commit suicide. Millennials have officially become their parents.

If this narrative generates enough media attention, what will probably happen is that OpenAI will just make their next models refuse to discuss anything related to mental health at all. That is not a net good.

ares623 · 6 days ago:
I don't get it. With all the evidence presented, you think this situation is similar to mass hysteria? Yes, it rhymes with what you described, but this one has hard evidence. And you're asking us to ignore it because a similar thing happened in the past?

pembrook · 6 days ago:
Yes, it's clear there were zero other factors that led this teen to suicide. Teens have never committed suicide before ChatGPT. Also, video games lead to school shootings, music leads to teens doing drugs, and pagers are responsible for teen pregnancies. Just look at all the evidence presented in this non-biased, objective article with no financial incentive to incite moral panic… obviously ChatGPT is guilty of murder here!

spacechild1 · 6 days ago:
ChatGPT is obviously not suited for therapy. No human therapist would ever say the things that ChatGPT did in this case. Someone posted the full complaint, which contains many chat excerpts. They are horrifying!

HPsquared · 6 days ago:
Journalists and writers as a general class already have interests opposed to LLMs, and the NYT in particular has an ongoing legal battle over copyright. Yes, it's clearly dripping with bias.

lowsong · 6 days ago:
Do AI apologists like you live in some parallel universe? One where it's acceptable to call the suicide of a vulnerable teenager "media attention bait"? You should be ashamed of yourself.

pembrook · 6 days ago:
No, the NYT should be ashamed of itself for using this tragic story to generate clicks and sell subscriptions and ads. They are quite literally profiting off this teen's death while spreading false moral panic, making us all dumber in the process. Do you also believe that video games and music led to Columbine? The NYT got a lot of attention for suggesting that at the time as well.

lowsong · 6 days ago:
This is just sad. Assuming you're not simply a troll or an LLM, please reflect on your own behaviour.

pembrook · 5 days ago:
I'll take your ad hominem as an admission that you cannot find any falsehood in what I'm saying.

lowsong · 5 days ago:
Falsehood in... what, exactly? This is a horrible tragedy, and you've taken the opportunity to complain about the bias you perceive in a news agency instead of engaging with the actual topic. From one of your other comments in this thread:

> Just look at all the evidence presented in this non-biased, objective article with no financial incentive to incite moral panic… obviously ChatGPT is guilty of murder here!

Who cares about the NYT? Go read the original legal filings and tell me which part is the moral panic. Is it the bit where the chatbot offered to write him a draft of his suicide note, analyzed the structural stability of his noose, and offered advice on a "beautiful suicide"? Or maybe the part where it told him to lean forward slowly to build up the correct pressure to cause death from hanging?

pembrook · 5 days ago:
I think you missed the part where the kid ignored ChatGPT's repeated help messaging and twisted it into giving this information by lying that he was writing a fictional story. Also, these are just the most inflammatory excerpts, selected by a lawyer trying to win a case. Without the full transcript, and with zero context around this kid's life in the real world, claiming ChatGPT is at fault here is just wild.

At what point do you ascribe agency or any responsibility to the actual humans involved (the 17-year-old, his parents, his school, his community, etc.)? While tragic, blaming [new thing the kids are doing] is fundamentally stupid, as it does nothing to address the real reasons this kid is dead. In fact, it gives everyone an "out" where they don't have to face up to any uncomfortable realities.

lowsong · 5 days ago:
We can debate how legally culpable OpenAI is for its products and whether it did enough to ensure its safeguards functioned, but if you can't agree without qualification that "a machine that encourages and enables suicide is dangerous and morally wrong", then there is nothing to discuss. There is no wider context that would make a product encouraging this behaviour acceptable. Deflecting blame onto the parents or the victim is extremely offensive, and I sincerely hope you don't repeat these comments to people who have lost loved ones to suicide.

Colony8409 · 4 days ago:
Blaming the chatbot and patting ourselves on the back when it's all said and done is a great way to guarantee that this same tragedy happens again, and again, and... It's a huge disservice to this child and to the millions of other suffering children.