| ▲ | moralestapia 5 days ago |
| I kind of agree with you, as I wouldn't use LLMs for that. But one cannot speak for everybody; if it's useful for someone in that context, why is that an issue? |
|
| ▲ | TimTheTinker 5 days ago | parent | next [-] |
| Because more than any other phenomenon, LLMs are capable of bypassing natural human trust barriers. We ought to treat their output with significant detachment and objectivity, especially when they give personal advice or offer support. But especially for non-technical users, LLMs leap over the uncanny valley and create conversational attachment. The conversational capabilities of these models directly engage people's relational wiring and easily fool many people into believing:

(a) the thing on the other end of the chat is thinking/reasoning and is personally invested in the process (not merely autoregressive stochastic content generation / vector path following),

(b) its opinions, thoughts, recommendations, and relational signals are the result of that reasoning, some level of personal investment, and a resulting mental state it has with regard to me, and thus

(c) what it says is personally meaningful on a far higher level than the output of other types of compute (search engines, constraint solving, etc.).

I'm sure any of us can mentally enumerate a lot of the resulting negative effects. As with social media, there's a temptation to replace important relational parts of life with engaging an LLM, since it always responds immediately with something that feels at least somewhat meaningful. But in my opinion the worst effect is the temptation to turn to LLMs first when life trouble comes, instead of to family/friends/God/etc. I don't mean for help understanding a cancer diagnosis (no problem with that), but for support, understanding, reassurance, personal advice, and hope. In the very worst cases, people have been treating an LLM as a spiritual entity -- not unlike the ancient Oracle of Delphi -- getting sucked deeply into some kind of spiritual engagement with it, and destroying their real relationships as a result.

A parallel problem: just as with people who know they're taking a placebo pill, even people who are aware of the completely impersonal underpinnings of LLMs can adopt a functional belief in some of (a)-(c), even though they really know better. That's the power of verbal conversation, and in my opinion, LLM vendors ought to respect that power far more than they have. |
| |
| ▲ | 5 days ago | parent | next [-] | | [deleted] | |
| ▲ | varispeed 5 days ago | parent | prev | next [-] | | [flagged] | | |
| ▲ | TimTheTinker 5 days ago | parent [-] |

> I've seen many therapists and [...] their capabilities were much worse

I don't doubt it. The steps to mental and personal wholeness can be surprisingly concrete and formulaic for most life issues - stop believing these lies & doing these types of things, start believing these truths & doing these other types of things, etc.

But were you tempted to stick to an LLM instead of finding a better therapist or engaging with a friend? In my opinion, assuming the therapist or friend is competent, the relationship itself is the most valuable aspect of therapy. That relational context helps you honestly face where you really are now--never trust an LLM to do that--and learn and grow much more, especially if you're lacking meaningful, honest relationships elsewhere in your life. (And many people who already have healthy relationships can skip the therapy, read books/engage an LLM, and talk openly with their friends about how they're doing.) Healthy relationships with other people are irreplaceable with regard to mental and personal wholeness.

> I think you just don't like that LLM can replace therapist and offer better advice

What I don't like is the potential loss of real relationship and the temptation to trust LLMs more than you should. Maybe that's not happening for you -- in that case, great. But don't forget LLMs have zero skin in the game, no emotions, and nothing to lose if they're wrong.

> Hate to break it to you, but "God" are just voices in your head.

Never heard that one before :) /s |
| |
| ▲ | MattGaiser 5 days ago | parent | prev [-] |

> We ought to treat their output with significant detachment and objectivity, especially when they give personal advice or offer support.

Eh, ChatGPT is inherently more trustworthy than the average person, simply because it will not leave, will not judge, will not tire of you, has no ulterior motive, and, if asked to check its work, has no ego.

Does it care about you more than most people do? Yes, simply by not being interested in hurting you, not needing anything from you, and being willing to not go away. | |
| ▲ | pmarreck 5 days ago | parent | next [-] |

Unless you had a really bad upbringing, "caring" about you is not simply not hurting you, not needing anything from you, or not leaving you.

One of the important challenges of existence, IMHO, is the struggle to authentically connect to people... and to recover from rejection (from other people's rulers, which eventually shows you how to build your own ruler for yourself, since you are immeasurable!). Which LLMs can now undermine, apparently.

It's similar to how gaming (which I happen to enjoy, btw... at a distance) hijacks your need for achievement/accomplishment. But, also like gaming, which can work alongside actual real-life achievement, it can work OK as an adjunct/enhancement to existing sources of human authenticity. |
| ▲ | TimTheTinker 5 days ago | parent | prev [-] | | You've illustrated my point pretty well. I hope you're able to stay personally detached enough from ChatGPT to keep engaging in real-life relationships in the years to come. | | |
| ▲ | AlecSchueler 5 days ago | parent [-] | | It's not even the first time this week I've seen someone on HN apparently ready to give up human contact in favour of LLMs. |
|
|
|
|
| ▲ | csours 5 days ago | parent | prev | next [-] |
| Speaking for myself: the human mind does not seek truth or goodness, it primarily seeks satisfaction. That satisfaction happens in a context, and every context is at least a little bit different.

The scary part: it is very easy for LLMs to pick up someone's satisfaction context and feed it back to them. That can distort the original satisfaction context, and it may provide improper satisfaction (if a human did this, it might be called "joining a cult" or "emotional abuse" or "co-dependence").

You may also hear this expressed as "wire-heading". |
| |
|
| ▲ | chowells 5 days ago | parent | prev | next [-] |
| The issue is that people in general are very easy to fool into believing something harmful is helping them. If it were actually useful, it wouldn't be an issue. But just because someone believes something is useful doesn't mean it actually is. |
|
| ▲ | lukan 5 days ago | parent | prev | next [-] |
| Well, because in a worst-case scenario, if the pilot of a big airliner opts for ChatGPT therapy instead of the real thing and then dies by suicide while flying, other people suffer the consequences too. |
| |
| ▲ | anonymars 5 days ago | parent | next [-] | | Pilots don't go to real therapy, because real pilots don't get sad https://www.nytimes.com/2025/03/18/magazine/airline-pilot-me... | | |
| ▲ | oceanplexian 5 days ago | parent | next [-] | | Yeah, I was going to say: as a pilot, I can tell you there is no such thing as "therapy" for pilots. You would permanently lose your medical certificate if you even mentioned the word to your doctor. | | |
| ▲ | moralestapia 5 days ago | parent | prev [-] | | Fascinating read. Thanks. | | |
| ▲ | nickthegreek 5 days ago | parent [-] | | If this type of thing really interests you and you want to go on a wild ride, check out season 2 of Nathan Fielder's The Rehearsal. You don't need to watch season 1. |
|
| |
| ▲ | renewiltord 5 days ago | parent | prev [-] | | That's the worst-case scenario? I can always construct worse ones. Suppose Donald Trump goes to a bad therapist and then decides to launch nukes at Russia. Damn, this therapy profession needs to be heavily regulated. It could lead to the extinction of mankind. | |
| ▲ | andy99 5 days ago | parent [-] | | Doc: The encounter could create a time paradox, the result of which could cause a chain reaction that would unravel the very fabric of the spacetime continuum and destroy the entire universe! Granted, that's a worst-case scenario. The destruction might in fact be very localised, limited to merely our own galaxy. Marty: Well, that's a relief. | | |
| ▲ | anonymars 5 days ago | parent [-] | | Good thing Biff Tannen becoming president was a silly fictional alternate reality. Phew. |
|
|
|
|
| ▲ | saubeidl 5 days ago | parent | prev | next [-] |
| Because it's probably not great for one's mental health to pretend a statistical model is one's friend? |
|
| ▲ | zdragnar 5 days ago | parent | prev | next [-] |
| Whether it's the Hippocratic oath, the rules of the APA, or any other organization's code, almost all share "do no harm" as a core tenet. LLMs cannot conform to that rule because they cannot distinguish between good advice and enabling bad behavior. |
| |
| ▲ | dcrazy 5 days ago | parent | next [-] | | The counterargument is that this is just a training problem, and IMO it's a fair point. Neural nets are used as classifiers all the time; it's reasonable to think that sufficient training data could produce a model that follows the professional standards of care in any situation you hand it. The real problem is that we can't tell when, or if, we've reached that point. The risk of a malpractice suit influences how human doctors act. You can't sue an LLM. It has no fear of losing its license. | |
| ▲ | macintux 5 days ago | parent | next [-] |

An LLM would, surely, have to:

* Know whether its answers are objectively beneficial or harmful

* Know whether its answers are subjectively beneficial or harmful in the context of the current state of a person it cannot see, cannot hear, cannot understand.

* Know whether the user's questions, over time, trend in the right direction for that person.

That seems awfully optimistic, unless I'm misunderstanding the point, which is entirely possible. | |
| ▲ | dcrazy 5 days ago | parent [-] | | It is definitely optimistic, but I was steelmanning the optimist’s argument. |
| |
| ▲ | meroes 5 days ago | parent | prev [-] | | Repeating the "sufficient training data" mantra even though doctor-patient confidentiality applies, and therapy notes -- often handwritten or incomplete -- are far less amenable to training on than X-rays. Pretty bold! |
| |
| ▲ | glenstein 5 days ago | parent | prev | next [-] |

> LLMs cannot conform to that rule because they cannot distinguish between good advice and enabling bad behavior.

I understand this as a precautionary approach that fundamentally prioritizes mitigating bad outcomes, and it's a valuable judgment to that end. But I also think the same statement can be viewed as the latest claim in the traditional debate of "computers can't do X." The credibility of those declarations is under more fire now than ever before.

Regardless of whether you agree that it's perfect, or that it can in principle be in full alignment with human values, at a bare minimum a model can be and is trained to avoid various forms of harmful discourse, and that obviously has an impact, judging from the voluminous reports of how noticeably different the user experience is depending on whether a model does or doesn't have guardrails.

So I don't mind it as a precautionary principle, but as an assessment of what computers are in principle capable of doing, it might be selling them short. |
| ▲ | moralestapia 5 days ago | parent | prev | next [-] | | Neither can most of the doctors I've talked to in the past, like, 20 years or so. | |
| ▲ | SoftTalker 5 days ago | parent | prev [-] | | Having an LLM as a friend or therapist would be like having a sociopath for those things -- not that an LLM is necessarily evil or antisocial, but it certainly meets the "lacks a sense of moral responsibility or social conscience" part of the definition. |
|
|
| ▲ | oh_my_goodness 5 days ago | parent | prev [-] |
| Fuck. |