| ▲ | davidcbc 6 days ago |
| This is a clear example of why the people claiming that using a chatbot for therapy is better than no therapy are... I'll be extremely generous and say misguided. This kid wanted his parents to know he was thinking about this and the chatbot talked him out of it. |
|
| ▲ | MajimasEyepatch 6 days ago | parent | next [-] |
| Exactly right. It's totally plausible that someone could build a mental health chatbot that results in better outcomes than people who receive no support, but that's a hypothesis that can and should be tested and subject to strict ethical oversight. |
|
| ▲ | MBCook 6 days ago | parent | prev | next [-] |
| How many of these cases exist in the other direction, where AI chatbots have actively harmed people’s mental health, possibly to the point of self-destructive behavior or self-harm? A single positive outcome is not enough to judge the technology beneficial, let alone safe. |
| |
| ▲ | kayodelycaon 6 days ago | parent | next [-] | | It’s way more common than you think. I’m in a bubble of anti-AI people and we can see people we know going down that road. My family (different bubble) knows people. Every group of people I know knows somebody doing this. For context, my friends and family are in the northern Midwest. Average people, not early adopters of new technology. | |
| ▲ | j_timberlake 6 days ago | parent | prev | next [-] | | This is called the "Man bites dog" bias. The many people who don't commit suicide because an AI confidant helped them out are never ever gonna make the news. Meanwhile the opposite cases are "TODAY'S TOP HEADLINE" and that's what people discuss. | |
| ▲ | throwawaybob420 6 days ago | parent | prev [-] | | idk dude if your technology encourages a teenager to kill himself and prevents him from alerting his parents via a cry for help, I don’t care how “beneficial” it is. | | |
| ▲ | threatofrain 6 days ago | parent | next [-] | | Although I don't believe current technology is ready for talk therapy, I'd say that anti-depressants can also cause suicidal thoughts and feelings. Judging the efficacy of medical technology can't be done with this kind of moral absolutism. | | |
| ▲ | podgietaru 6 days ago | parent | next [-] | | Suicidal ideation is a well-communicated side effect of antidepressants. Antidepressants are prescribed by trained medical professionals who will warn you about these side effects, encourage you to tell them if they occur, and encourage you to stop the medication if they do. It's almost as if we've built systems around this stuff for a reason. | | |
| ▲ | Denatonium 6 days ago | parent [-] | | In practice, they'll just prescribe a higher dose when that happens, thus worsening the problem. I'm not defending the use of AI chatbots, but you'd be hard-pressed to come up with a worse solution for depression than the medical system. | | |
| ▲ | podgietaru 6 days ago | parent [-] | | Not my experience at all. The psychiatrist who prescribed me antidepressants was _incredibly_ diligent, including with side effects that affected my day-to-day life, like loss of libido. We spent a long time finding something, but when we did it worked exceptionally well. We absolutely did not just increase the dose, and I'm almost certain the literature would NOT recommend increasing the dosage if the side effect was increased suicidality. The demonisation of medication needs to stop. It is an important tool in the toolbelt for depression. It is not the end of the journey, but it makes that journey much easier to walk. | | |
| ▲ | cameronh90 6 days ago | parent | next [-] | | I'm a happy sertraline user, but your experience sounds like the exception. Most people are prescribed antidepressants by their GP/PCP after a short consultation. In my case, I went to the doctor, said I was having problems with panic attacks, they asked a few things to make sure it was unlikely to be physical and then said to try sertraline. I said OK. In and out in about 5 minutes, and I've been on it for 3 years now without a follow up with a human. Every six months I do have to fill in an online questionnaire when getting a new prescription which asks if I've had any negative side effects. I've never seen a psychiatrist or psychologist in my life. From discussions with friends and other acquaintances, this is a pretty typical experience. P.S. This isn't in any way meant to be critical. Sertraline turned my life around. | | |
| ▲ | podgietaru 6 days ago | parent [-] | | This is probably fair - My experience comes both from the UK (where it was admittedly worse, but not that much) and the Netherlands - where it was fantastic. Even in the worst experiences, I had a followup appointment in 2, 4 and 6 weeks to check the medication. | | |
| ▲ | cameronh90 6 days ago | parent [-] | | My experience is in the UK, but it doesn't surprise me that you got more attention in the Netherlands. From the experiences of my family, if you want anything more than a paracetamol, you practically need sign off from the Minister of Health! Joking aside, they do seem to escalate more to specialists whereas we do more at the GP level. |
|
| |
| ▲ | npteljes 6 days ago | parent | prev [-] | | Unfortunately that's just a single good experience. (Unfortunately overall, not for you! I'm happy that your experience was so good.) Psych drugs (and many other drugs) are regularly overprescribed. Here is just one documented example: https://pmc.ncbi.nlm.nih.gov/articles/PMC6731049/ Opioids in the US are probably the most famous case though: https://en.wikipedia.org/wiki/Opioid_epidemic |
|
|
| |
| ▲ | AIPedant 6 days ago | parent | prev | next [-] | | I think it's fine to be "morally absolutist" when it's non-medical technology, developed with zero input from federal regulators, yet being misused and misleadingly marketed for medical purposes. | |
| ▲ | kelnos 6 days ago | parent | prev | next [-] | | That's a bit of an apples-to-oranges comparison. Anti-depressants are medical technology, ChatGPT is not. Anti-depressants are administered after a medical diagnosis, and use and effects are monitored by a doctor. This doesn't always work perfectly, of course, but there are accepted, regulated ways to use these things. ChatGPT is... none of that. | |
| ▲ | rsynnott 6 days ago | parent | prev | next [-] | | And that is one reason that use of anti-depressants is (supposed to be) medically supervised. | |
| ▲ | mvdtnz 6 days ago | parent | prev [-] | | Didn't take long for the whatabouters to arrive. |
| |
| ▲ | npteljes 6 days ago | parent | prev | next [-] | | You might not care personally, but this isn't how we evaluate anything; if it were, we wouldn't have anything in the world at all. Different things harm and kill people all the time, and many of them have barely any use beyond harmful activity, yet they are still active parts of our lives. I understand the emotional impact of what happened in this case, but there is not much to discuss if we just reject everything outright. | |
| ▲ | MBCook 6 days ago | parent | prev [-] | | I agree. If there was one death for 1 million saves, maybe. Instead, this just came up in my feed: https://arstechnica.com/tech-policy/2025/08/chatgpt-helped-t... | | |
| ▲ | rideontime 6 days ago | parent | next [-] | | This is the same case that is being discussed, and your comment up-thread does not demonstrate awareness that you are, in fact, agreeing with the parent comment that you replied to. I get the impression that you read only the headline, not the article, and assumed it was a story about someone using ChatGPT for therapy and gaining a positive outcome. | | |
| ▲ | MBCook 6 days ago | parent [-] | | I did! Because I can’t see past the paywall. I can’t even read the first paragraph. So the headline is the only context I have. | | |
| ▲ | rideontime 6 days ago | parent | next [-] | | I would advise you to gather more context before commenting in the future. | |
| ▲ | latexr 6 days ago | parent | prev [-] | | A link to bypass the paywall has been posted several hours before your comment, and currently sits at the top. https://news.ycombinator.com/item?id=45027043 I recommend you get in the habit of searching for those. They are often posted, guaranteed on popular stories. Commenting without context does not make for good discussion. |
|
| |
| ▲ | mvdtnz 6 days ago | parent | prev [-] | | What on Earth? You're posting an article about the same thing we're already discussing. If you want to contribute to the conversation you owe it to the people who are taking time out of their day to engage with you to read the material under discussion. |
|
|
|
|
| ▲ | UltraSane 6 days ago | parent | prev | next [-] |
| I don't know if it counts as therapy or not but I find the ability to have intelligent (seeming?) conversations with Claude about the most incredibly obscure topics to be very pleasant. |
| |
| ▲ | hattmall 6 days ago | parent | next [-] | | But do you really feel you are conversing? I could never get that feeling. To me it's not a conversation, it's just like an on-demand book that might be wrong. Not saying I don't use them to try to get information, but it certainly doesn't feel like anything other than getting information out of a computer. | | |
| ▲ | UltraSane 6 days ago | parent [-] | | "But do you really feel you are conversing?" Yes. For topics with lots of training data, like physics, Claude is VERY human-sounding. I've had very interesting conversations with Claude Opus about the Boltzmann brain issue and how I feel the conventional wisdom ignores the low probability of a Boltzmann brain having a spatially and temporally consistent set of memories. The fact that our brains exist in a universe that automatically creates consistent memories means the probability of us being Boltzmann brains is very low, since even if a Boltzmann brain pops into existence, its memories will most likely be completely random and insane/insensate. There aren't a lot of people who want to talk about Boltzmann brains. | | |
| ▲ | furyofantares 6 days ago | parent [-] | | It sounds like you're mostly just talking to yourself. Which is fine, but being confused about that is where people get into trouble. | | |
| ▲ | UltraSane 6 days ago | parent [-] | | "It sounds like you're mostly just talking to yourself" No, Claude does know a LOT more than I do about most things and does push back on a lot of things. Sometimes I am able to improve my reasoning, and other times I realize I was wrong. Trust me, I am aware of the linear algebra behind the curtain! But even when you mostly understand how they work, the best LLMs today are very impressive. And latent spaces are a fundamentally new way to index data. | | |
| ▲ | furyofantares 6 days ago | parent | next [-] | | You can talk to yourself while reading books and searching the web for information. I don't think the fact that you're learning from information the LLM is pulling in means you're really conversing with it. I do find LLMs very useful and am extremely impressed by them, I'm not saying you can't learn things this way at all. But there's nobody else on the line with you. And while they will emit text which contradicts what you say if it's wrong enough, they've been heavily trained to match where you're steering things, even if you're trying to avoid doing any steering. You can mostly understand how these work and still end up in a feedback loop that you don't realize is a feedback loop. I think this might even be more likely the more the thing has to offer you in terms of learning - the less qualified you are on the subject, the less you can tell when it's subtly yes-and'ing you. | | |
| ▲ | elliotto 6 days ago | parent [-] | | I think the nature of a conversational interface that responds to natural language questions is fundamentally different to the idea that you talk to yourself while reading information sources. I'm not sure it's useful to dismiss the idea that we can talk with a machine. The current generation of LLMs have had their controversies, but these are still pre-alpha products, and I suspect in the future we will look back on releasing them unrestrained as a mistake. There's no reason the mistakes they make today can't be improved upon. If your experiences with learning from a machine are similar to mine, then we can both see a whole new world coming that's going to take advantage of this interface. |
| |
| ▲ | ceejayoz 6 days ago | parent | prev [-] | | > No, Claude does know a LOT more than I do about most things… Plenty of people can confidently act like they know a lot without really having that knowledge. | | |
| ▲ | UltraSane 6 days ago | parent [-] | | So you are denying that LLMs actually contain real knowledge? | | |
| ▲ | habinero 6 days ago | parent [-] | | They contain training data and a statistical model that might generate something true or it might generate garbage, both with equal confidence. You need to already know the answer to determine which is which. | | |
| ▲ | UltraSane 6 days ago | parent [-] | | Have you actually used Claude Opus 4.1? It is right far more than it is wrong. | | |
| ▲ | habinero 5 days ago | parent [-] | | How could you know? | | |
| ▲ | UltraSane 5 days ago | parent | next [-] | | How do you react to comments like this? https://news.ycombinator.com/item?id=44980896#44980913 I believe it absolutely should be, and it can even be applied to rare disease diagnosis. My child was just saved by AI. He suffered from persistent seizures, and after visiting three hospitals, none were able to provide an accurate diagnosis. Only when I uploaded all of his medical records to an AI system did it immediately suggest a high suspicion of MOGAD-FLAMES — a condition with an epidemiology of roughly one in ten million. Subsequent testing confirmed the diagnosis, and with the right treatment, my child recovered rapidly. For rare diseases, it is impossible to expect every physician to master all the details. But AI excels at this. I believe this may even be the first domain where both doctors and AI can jointly agree that deployment is ready to begin. | |
| ▲ | UltraSane 5 days ago | parent | prev [-] | | How can you? | | |
| ▲ | habinero 5 days ago | parent [-] | | Because I don't rely on a glorified text generator for what's true lol |
|
|
|
|
|
|
|
|
|
| |
| ▲ | AIPedant 6 days ago | parent | prev | next [-] | | Therapy isn't about being pleasant, it's about healing and strengthening, and it's supposed to be somewhat unpleasant. Colin Fraser had a good tweet about this: https://xcancel.com/colin_fraser/status/1956414662087733498#... "In a therapy session, you're actually going to do most of the talking. It's hard. Your friend is going to want to talk about their own stuff half the time and you have to listen. With an LLM, it's happy to do 99% of the talking, and 100% of it is about you."
| |
| ▲ | _petronius 6 days ago | parent | prev [-] | | It does not count as therapy, no. Therapy (if it is any good) is a clinical practice with actual objectives, not pleasant chit-chat. |
|
|
| ▲ | npteljes 6 days ago | parent | prev | next [-] |
Yeah, I was one such person, but I might ultimately give up on this. If I do, it will be for CYA reasons, not because I think it's a bad thing overall. In this case, the outcome is horrible, and the answers that ChatGPT provided were inexcusable. But looking at the bigger picture, how much better a chance does a person have when everyone tells them to "go to therapy" or to "talk to others" and such? What others? Searching "online therapy", BetterHelp is the second result. BetterHelp doesn't exactly have a good reputation online, but still, their influence is widespread. Licensed therapists can also be bad actors. There is no general "good thing" that is tried and true for every particular case of human mental health, but even letting that go, the position is abused just as any other position of authority or power is, with many bad therapists out there. Not to mention the other people who pose as (mental) health experts, life coaches, and such. Or the people who recruit for a cult. Frankly, even in the face of this horrible event, I'm not convinced that AI in general fares that much worse than the sum of the people who offer a recipe for a better life, skills, company, camaraderie. Rather, I feel like AI is in a situation like self-driving cars, where we expect the new thing to be 110%, even though we know that the old thing is far from perfect. I do think that OpenAI is liable though, and rightfully so. Their service has a lot of power to influence, clearly outlined in the tragedy shown in the article. And so, they also have a lot of responsibility to rein that in. If this were a forum where the teen was pushed to suicide, police could go after the forum participants, moderators, and admins. But in the case of OpenAI, there is no such person; the service itself is the thing. So the one liable must be the company that provides the service.
|
| ▲ | staticman2 6 days ago | parent | prev [-] |
There's no indication the kid asked ChatGPT to act as a therapist. Unless people are claiming any prompt is better than no therapy, I don't think your framing is fair.