| ▲ | Forgeties79 7 hours ago |
| I would be very careful doing this |
|
| ▲ | mettamage 11 minutes ago | parent | next [-] |
| In my experience, one also needs to be careful with actual therapists |
|
| ▲ | potatoskins 7 hours ago | parent | prev | next [-] |
You always have to be careful with LLMs, but to be fair, Claude felt like such a good therapist to me - at least a good place to start if you want to unpack yourself. I have been to three short human therapist sessions in my life, and I only felt any kind of genuine self-improvement and progress with Claude.
| |
| ▲ | QuiDortDine 7 hours ago | parent [-] |
And how do you draw the line between feeling progress and actually making progress?
| ▲ | moduspol 6 hours ago | parent | next [-] |
Counter-point: I often raise the same question with people who have human therapists. I do not get strong responses.
| ▲ | layer8 6 hours ago | parent | prev [-] |
The same way you distinguish between feeling like you have a problem and actually having a problem.
| ▲ | Forgeties79 6 hours ago | parent [-] |
This is needlessly flippant and not really the same thing. Determining progress in a therapy setting is usually a collaborative effort between the therapist and the client. An LLM is not a reliable agent to make that determination.
| ▲ | logifail 3 hours ago | parent | next [-] |
> Determining progress in a therapy setting is usually a collaborative effort between the therapist and the client. An LLM is not a reliable agent to make that determination

Can anyone describe how to determine whether a (professional, human) therapist is "a reliable agent" to make such a determination?
| ▲ | Forgeties79 2 hours ago | parent [-] |
If you want to call the entire field of behavioral health, and the training it involves, into question, that is fine. But if that is how you feel, then this discussion is really about something different, and I can't bridge the gap here.
| |
| ▲ | layer8 6 hours ago | parent | prev [-] |
I didn't claim that an LLM is that, and I fully agree that it is not. I'm saying that one is inherently one's own judge of whether one has a problem. You go to a therapist when you feel you have a problem that warrants it. You stop going when you feel you no longer do. And OP is very likely assessing their progress in the same way. I wasn't being flippant, assuming the parent was asking a genuine question.
| ▲ | Forgeties79 2 hours ago | parent [-] |
> I'm saying that one is inherently one's own judge of whether one has a problem. You go to a therapist when you feel you have a problem that warrants it

That is true for certain types of therapy and clinical care, but it is not always - and often isn't - the case. Plenty of diagnoses and care protocols are not a matter of opinion, nor based on you feeling there's an issue or deciding on your own that there is no longer an issue.
|
| ▲ | shimman 7 hours ago | parent | prev [-] |
You can't be careful at all doing this; it's like smoking a cigarette in a dynamite factory. Using LLMs for therapy is deeply dystopian and disgusting: people need human empathy in therapy, and LLMs do not emit empathy. It is a complete disaster waiting to happen for that individual.
| |
| ▲ | nuancebydefault 5 hours ago | parent | next [-] |
My experience is that it tries to look at your situation in an objective way and tries to help you analyse your thoughts and actions. It comes across as very empathetic, though, so there can be a danger if you are easily persuaded into seeing it as a friend.
| ▲ | Forgeties79 2 hours ago | parent | next [-] |
> in an objective way

This is one of the great myths of models in countless fields and industries. LLMs are absolutely in no way objective. Now if you want to call it an "outside opinion", that's valid. But do not kid yourself into thinking it is somehow empirical or objective.
| ▲ | worksonmine 5 hours ago | parent | prev [-] |
It doesn't try to do anything. It doesn't work like that. It regurgitates the most likely tokens found in the training set.
| ▲ | nuancebydefault 4 hours ago | parent | next [-] |
Hmmmm, I didn't know that... so your point is that a machine is not human? Look, I know it doesn't try, just like a sorting algorithm does not try to sort, an article does not try to convey an opinion, and a law does not try to make society more organized.
| ▲ | cruffle_duffle 4 hours ago | parent | prev [-] |
That analysis is so reductive it is almost worthless. Technically true, but very unhelpful in terms of actually using an LLM. It is a first principle, though, so it helps to "stir the context window's pot" by having it pull in research and other material from the web that will ground it, so it doesn't just tell you exactly what you prompt it to say.
| ▲ | worksonmine 3 hours ago | parent [-] |
They are amazing tools, but when people try to give them agency, someone has to explain it in simple terms.
|
| |
| ▲ | astrange 6 hours ago | parent | prev | next [-] |
Claude has lots of empathy. The issue is the opposite: it isn't very good at challenging you, and it's not capable of independently verifying that you're not bullshitting it or lying about your own situation. But it's better than talking to yourself or an abuser!
| ▲ | bloomca 6 hours ago | parent [-] |
It's about the same as talking to yourself; LLMs simply agree with anything you say unless it is directly harmful. Definitely agree about talking to an abuser, though.

Sometimes people indeed just need validation, and it helps them a lot; in that case LLMs can work. Alternatively, I assume some people just put the whole situation into words and that alone helps. But if someone needs something else, LLMs can be straight up dangerous.
| ▲ | astrange 5 hours ago | parent | next [-] |
> It's about the same as talking to yourself, LLMs simply agree with anything you say unless it is directly harmful.

They have world knowledge and are capable of explaining things and doing web searches. That's enough to help. I mean, sometimes people just need answers to questions.
| ▲ | JoshTriplett 5 hours ago | parent | prev [-] |
> It's about the same as talking to yourself

In one way it's potentially worse than talking to yourself. Some part of you might recognize that you need to talk to someone other than yourself; an LLM might make you feel like you've done that, while reinforcing whatever you already think rather than breaking you out of your patterns. Also, LLMs have more resources and can do some "creative" enabling of a person stuck in a loop, so if you are thinking dangerous things but lack the wherewithal to put them into action, an LLM could make you more dangerous (to yourself or to others).
|
| |
| ▲ | DrewADesign 7 hours ago | parent | prev [-] |
Using an LLM for therapy is like using an iPad as an all-purpose child attention pacifier. Sure, it's convenient. Sure, there's no immediate harm. Why a stressed parent would be attracted to the idea is obvious... and of course it's a terrible idea.
|