dimgl 6 hours ago

Even as someone who (wrongly) believed that I had high emotional intelligence, I too was bitten by this. Almost a year ago, when LLMs were starting to become more ubiquitous and powerful, I discussed a big life/professional decision with an LLM over the course of many months. I took its recommendation. Ultimately it turned out to be the wrong decision.

Thankfully it was recoverable, but it really sobered me up on LLMs. The fault is on me, to be clear, as LLMs are just a tool. The issue is that lots of LLMs try to come across as personable and friendly, which lulls users into a false sense of security. So I don't know what my trajectory would have been if I were a teenager with these powerful tools.

I do think that LLMs have gotten much better at this, especially Claude, and will often push back on bad choices. But my opinion of LLMs has forever changed. I wonder how many other people have made terrible choices because these tools talked them into it.

whodidntante 5 hours ago | parent | next [-]

I think that if you go to an AI for advice and emotional support, it will do what most people will do: tell you what it thinks you want to hear. I am not surprised about this at all, and I do notice that when you veer into these areas, it can do so in surprisingly subtle and dangerous ways.

I try to focus on results. Things like an app that does what you want, data and reports that you need, or technical things like setting up a server, setting up a database, building a website, etc.

I have also found it useful for feedback and advice, but only once I have had it generate data that I can verify. For example: financial analysis or modelling, health advice (again, fact-based), tax modelling, etc., but again, all based on verifiable data/tables/charts.

I am very surprised by what Claude is capable of across the entire tech stack: code, sysadmin, system integration, security. I find it scary. Not just the speed, but also the quality and the mental load; it is a difference of kind, not quantity.

Personal advice on life decisions/relationships? No way I would go there.

It is also good for me to know that the tools I have built, the data I have gathered, and my thinking approach place me as one of the most intelligent developers and analysts in the world.

cruffle_duffle 3 hours ago | parent | next [-]

That is why you have to always have it ground itself in something. Have it search for relevant research or professional whatever and pull that into context. Otherwise it’s just your word plus its training data.

I had to deal with a close family friend going through alcohol withdrawal and getting checked in at a recovery clinic for detox, and I used Claude heavily. The first thing I had it do was that “deep research” around the topic of alcohol addiction, withdrawal, etc., and then I made that a project document along with clear guidelines about how it shouldn’t make inferences beyond what is in its context and supporting docs. We also spent a whole session crafting a good set of instructions (making sure it was using Anthropic’s own guidelines for its model…)

Little differences in prompts make a huge difference in the output.

I dunno. It is possible to use these models for dumping crazy shit you are going through. But don’t kid yourself about their output, and aggressively find ways to stomp out the things it has no real way to say authoritatively.
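
If you wanted to script that same grounding pattern against the API instead of the app's project docs, a rough sketch might look like this (file names, prompt wording, and model string are all hypothetical, not my actual setup):

    import anthropic  # pip install anthropic

    # Grounding material prepared ahead of time: the "deep research"
    # summary plus the behavioral guidelines. Paths are hypothetical.
    research = open("withdrawal_research.md").read()
    guidelines = open("project_guidelines.md").read()

    client = anthropic.Anthropic()  # reads ANTHROPIC_API_KEY from the env

    response = client.messages.create(
        model="claude-sonnet-4-5",  # illustrative; use whatever model you have
        max_tokens=1024,
        # The system prompt pins answers to the vetted material and
        # forbids inferences beyond it.
        system=(
            "Answer only from the reference material below. If something "
            "is not supported by it, say so explicitly instead of guessing.\n\n"
            "=== RESEARCH ===\n" + research +
            "\n\n=== GUIDELINES ===\n" + guidelines
        ),
        messages=[{
            "role": "user",
            "content": "Which withdrawal symptoms mean we go straight to the ER?",
        }],
    )
    print(response.content[0].text)

The point is just that everything the model is allowed to assert traces back to a document you vetted first.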

stephbook 5 hours ago | parent | prev [-]

Nice joke, didn't see it coming

KellyCriterion 4 hours ago | parent [-]

Sounds AI-written, eh? :-D

(esp last sentence?)

notracks 5 hours ago | parent | prev | next [-]

I recently found out that Claude's latest model, Sonnet 4.6, scores the highest on Bullsh*tBench[0] (funny name, I know). It's a recent benchmark that measures whether an LLM refuses nonsense or pushes back on bad choices, so Claude has definitely gotten better.

[0] - https://petergpt.github.io/bullshit-benchmark/viewer/index.v...

astrange 5 hours ago | parent | next [-]

I haven't tried talking to Sonnet much, but Opus 4.6 is very sycophantic. Not in the sense of explicitly always agreeing with you, but in the sense that its answers strictly conform to the worldview in your questions and don't go outside it or disagree with it.

It _does_ love to explicitly agree with anything it finds in web search though.

(Anthropic tries to fight this by adding a hidden prompt that makes it disagree with you and tell you to go to bed, which doesn't help.)

layer8 5 hours ago | parent | prev | next [-]

You don’t have to star out things like that on HN.

uniq7 3 hours ago | parent | prev | next [-]

Good call on censoring yourself preemptively, otherwise HN could demonetize your comment

akurilin 4 hours ago | parent | prev [-]

Great link, thanks for sharing. Confirmed what I saw empirically by comparing the different models during daily use.

NortySpock 5 hours ago | parent | prev | next [-]

One mental model I have with LLMs is that they have been the subject of extreme evolutionary selection forces that are entirely the result of human preferences.

Any LLM not sufficiently likable and helpful in the first two minutes was deleted or not iterated on further, or had so much retraining (sorry, "backpropagation") that it's not the same model it started out as.

So it's going to say whatever it "thinks" you want it to say, because that's how it was "raised".

user_7832 3 hours ago | parent [-]

Fully agree. I wonder how this will show up in the long term. Will every business/CEO just do more of what they already wanted to do, but now supported by AI/LLMs?

The possibilities in "dangerous" fields are a bit more frightening. A general is much more likely to ask ChatGPT "Do you think this war is a good idea / should I drop a bomb?" rather than to use it as an actually helpful tool, where you might ask "What are 5 hidden points in favor of/against bombing that one has likely missed?"

The more you use AI as a strict tool that can be wrong, the safer you are. Unfortunately, I'm not sure that helps if the guy bombing your city (or even your president) is using AI poorly, and their decisions affect you.

tavavex 2 hours ago | parent [-]

> Will every business/CEO just do more of what they already wanted to do, but now supported by AI/LLMs?

Arguably, it already worked that way. The best way to climb the ranks of a 'dictatorial' organization (a repressive government or an average large business) is to always say yes. Adopt what the people from up above want you to use, say and think. Don't question anything. Find silver linings in their most deranged ideas to show your loyalty. The rich and powerful that occupy the top ranks of these structures often hate being challenged, even if it's irrational for their well-being. Whenever you see a country or a company making a massive mistake, you can often trace it to a consequence of this. Humans hate being challenged and the rich can insulate themselves even further from the real world.

What's worrying me is the opposite - that this power is more available now. Instead of requiring a team of people and an asset cushion that lets you act irrationally, now you just need to have a phone in your pocket. People get addicted to LLMs because they can provide endless, varied validation for just about anything. Even if someone is aware of their own biases, it's not a given that they'll always counteract the validation.

layla5alive 6 hours ago | parent | prev | next [-]

Any more context you're willing to share?

xXSLAYERXx 3 hours ago | parent [-]

We really do love dirty laundry, don't we? I'm sure whatever the context is, it is deeply personal. Do you also have your popcorn ready?

dimgl 2 hours ago | parent [-]

Thank you. Yes, I'm going to refrain from airing out my dirty laundry. I made a bad decision, now I'm living with it, and more context doesn't actually change the intent behind my message: these tools are dangerous. Getting better, but still dangerous.

qsera 5 hours ago | parent | prev | next [-]

If you use LLMs with the underlying assumption that they are capable of "thinking" or "caring", then you are going to get burned pretty badly. It is an illusion, and illusions disappear when they have to bear the real weight of reality.

But sadly LLMs push all the right buttons that lead humans into that kind of behavior. And the marketing around LLMs works overtime to reinforce that behavior.

But if you instead ignore all that and use LLMs as a search tool, then you will get positive returns from using them.

matwood 4 hours ago | parent | prev | next [-]

> I took its recommendation. Ultimately it turned out to be the wrong decision.

Curious if you think a single person would have helped you make a better decision? Not everything works out. If a friend helped me make a decision I certainly wouldn’t blame them later if it didn’t work out. It’s ultimately my call.

paulhebert 3 hours ago | parent [-]

If a friend gave me bad advice about a major life decision I would stop consulting them for future life decisions

davyAdewoyin 6 hours ago | parent | prev | next [-]

I largely agree. I also thought I was smart enough not to be deluded into a false sense of security, but interacting with an LLM is so tricky and slippery that, more often than not, you are led to believe you just solved a problem no one had solved in a hundred years.

My guideline now for interacting with LLMs is to believe the result only if it is factual and easily testable, or if I'm a domain expert. Anything else, especially if I'm completely ignorant about the subject, I approach with a high degree of suspicion that I can be led astray by its sycophancy.

nuancebydefault 4 hours ago | parent | prev | next [-]

Weird, I am using Copilot and it mostly steers me towards self-reflection and tries to look at things objectively. It is very friendly and comes across as empathetic, so as not to hurt your feelings; that is probably baked in to keep the conversation going...

lovecg 6 hours ago | parent | prev | next [-]

Let’s just hope that the people in charge of the really important decisions that affect us all approach LLM generated advice with the same wisdom.

saghm 5 hours ago | parent [-]

They don't: https://fortune.com/2026/03/17/krafton-subnautica-chatgpt-de...

paulhebert 3 hours ago | parent [-]

Thanks for sharing this. Subnautica is one of my favorite games, so I was very excited for the sequel and very frustrated by this move by Krafton.

It’s even more maddening that this greedy maneuver was orchestrated based on LLM advice.

I’m glad the Subnautica team won the lawsuit. Maybe I can play it now without feeling guilty.

jt2190 5 hours ago | parent | prev | next [-]

I’m struggling to understand how the advice coming from an LLM is any more or less “good” than advice coming from a human. Or is this less about the “advice” part of LLMs and more about the “personable” part, i.e. you felt more at ease seeking and trusting this kind of advice from an LLM?

nuancebydefault 4 hours ago | parent [-]

It is much easier to share personal feelings with an LLM, I found. It also tried to keep me happy to get the conversation going, but to me its advice feels mostly 'objective', or the most socially acceptable, e.g. keeping a good relationship is more important than trying a new one with someone else because you 'feel something' around them. With me, it tried to work out together the sources or causes of that feeling, e.g. you recognize parts of yourself in someone else, or in the past you had very good or very bad experiences around an encounter.

jt2190 2 hours ago | parent [-]

Interesting, thanks for elaborating.

potatoskins 6 hours ago | parent | prev | next [-]

Yeah, I think Claude is a lot more logical in that sense. I use it for some therapy sessions myself, and it pushes back a bit more than OpenAI and Gemini.

borski 5 hours ago | parent | next [-]

https://news.ycombinator.com/item?id=47395779

Forgeties79 6 hours ago | parent | prev | next [-]

I would be very careful doing this

potatoskins 5 hours ago | parent | next [-]

You always have to be careful with LLMs, but to be fair, I felt like Claude was such a good therapist, or at least a good place to start if you want to unpack yourself. I have been to three short human therapy sessions in my life, and I only felt some kind of genuine self-improvement and progress with Claude.

QuiDortDine 5 hours ago | parent [-]

And how do you draw the line between feeling progress and actually making progress?

moduspol 5 hours ago | parent | next [-]

Counterpoint: I often raise the same question about people with human therapists, and I do not get strong responses.

layer8 5 hours ago | parent | prev [-]

The same way you distinguish between feeling like having a problem and actually having a problem.

Forgeties79 5 hours ago | parent [-]

This is needlessly flippant and not really the same thing. Determining progress in a therapy setting is usually a collaborative effort between the therapist and the client. An LLM is not a reliable agent to make that determination.

logifail an hour ago | parent | next [-]

> Determining progress in a therapy setting is usually a collaborative effort between the therapist and the client. An LLM is not a reliable agent to make that determination

Can anyone describe how to determine whether a (professional, human) therapist is "a reliable agent" to make such a determination?

Forgeties79 22 minutes ago | parent [-]

If you want to call into question the entire field of behavioral health and the training involved, then that is fine, but if that's how you feel, then this entire discussion is really about something different and I can't bridge the gap here.

layer8 5 hours ago | parent | prev [-]

I didn’t claim that an LLM is that, and I fully agree that it is not. I’m saying that one is inherently one's own judge of whether one has a problem. You go to a therapist when you feel you have a problem that warrants it. You stop going when you feel you don’t have it anymore. And OP is very likely assessing their progress in the same way. I wasn’t being flippant, assuming the parent was asking a genuine question.

Forgeties79 21 minutes ago | parent [-]

> I’m saying that one is inherently one’s own judge of whether one has a problem. You go to a therapist when you feel you have a problem that warrants it

That is for certain types of therapy/clinical care. It is not always - and often isn’t - the case. Plenty of diagnoses and care protocols are not a matter of opinion or based on “you feeling there’s an issue” or deciding on your own there is no longer an issue.

shimman 5 hours ago | parent | prev [-]

You can't be careful at all doing this; it's like smoking a cigarette in a dynamite factory.

Using LLMs for therapy is so deeply dystopian and disgusting, people need human empathy for therapy. LLMs do not emit empathy.

Complete disaster waiting to happen for that individual.

nuancebydefault 4 hours ago | parent | next [-]

My experience is that it tries to look at your situation in an objective way and tries to help you analyse your thoughts and actions. It comes across as very empathetic though, so there can be a danger if you are easily persuaded into seeing it as a friend.

worksonmine 3 hours ago | parent [-]

It doesn't try to do anything. It doesn't work like that. It regurgitates the most likely tokens found in the training set.

nuancebydefault 2 hours ago | parent | next [-]

Hmmmm, I didn't know that... so a machine is not human, is your point? Look, I know it doesn't try, just like a sorting algo does not try to sort, or an article does not try to convey an opinion, and a law does not try to make society more organized.

cruffle_duffle 2 hours ago | parent | prev [-]

That is so reductive an analysis that it is almost worthless. Technically true, but very unhelpful in terms of using an LLM.

It is a first principle, though, so it helps to “stir the context window's pot” by having it pull in research and other shit on the web that will help ground it and not just tell you exactly what you prompt it to say.

worksonmine an hour ago | parent [-]

They are amazing tools, but when people try to give them agency someone has to explain it in simple terms.

astrange 4 hours ago | parent | prev | next [-]

Claudes have lots of empathy. The issue is the opposite - it isn't very good at challenging you and it's not capable of independently verifying you're not bullshitting it or lying about your own situation.

But it's better than talking to yourself or an abuser!

bloomca 4 hours ago | parent [-]

It's about the same as talking to yourself; LLMs simply agree with anything you say unless it is directly harmful. Definitely agree about talking to an abuser, though.

Sometimes people indeed just need validation and it helps them a lot, in that case LLMs can work. Alternatively, I assume some people just put the whole situation into words and that alone helps.

But if someone needs something else, they can be straight up dangerous.

astrange 4 hours ago | parent | next [-]

> It's about the same as talking to yourself; LLMs simply agree with anything you say unless it is directly harmful.

They have world knowledge and are capable of explaining things and doing web searches. That's enough to help. I mean, sometimes people just need answers to questions.

JoshTriplett 3 hours ago | parent | prev [-]

> It's about the same as talking to yourself

In one way it's potentially worse than talking to yourself. Some part of you might recognize that you need to talk to someone other than yourself; an LLM might make you feel like you've done that, while reinforcing whatever you think rather than breaking you out of patterns.

Also, LLMs can have more resources and do some "creative" enabling of a person stuck in a loop, so if you are thinking dangerous things but lack the wherewithal to put them into action, an LLM could make you more dangerous (to yourself or to others).

DrewADesign 5 hours ago | parent | prev [-]

Using an LLM for therapy is like using an iPad as an all-purpose child attention pacifier. Sure, it’s convenient. Sure there’s no immediate harm. Why a stressed parent would be attracted to the idea is obvious… and of course it’s a terrible idea.

kortilla 5 hours ago | parent | prev [-]

Don’t call them therapy sessions. They kind of look like it, but ultimately these are smoke-blowing machines, which is very far from what a therapist would do.

saghm 5 hours ago | parent [-]

Six decades later, and we're still trying to explain the same things to people[1]:

> Some of ELIZA's responses were so convincing that Weizenbaum and several others have anecdotes of users becoming emotionally attached to the program, occasionally forgetting that they were conversing with a computer. Weizenbaum's own secretary reportedly asked Weizenbaum to leave the room so that she and ELIZA could have a real conversation. Weizenbaum was surprised by this, later writing: "I had not realized ... that extremely short exposures to a relatively simple computer program could induce powerful delusional thinking in quite normal people."

[1]: https://en.wikipedia.org/wiki/ELIZA

zpeti 4 hours ago | parent | prev | next [-]

I also used it for advice on a massive personal decision, but I specifically asked it to debate with me and persuade me of the other side. I explicitly prompted it for things I was not thinking about, or ways I could be wrong.

It was extremely good at the other side too. You just have to ask. I can imagine most people don't try this, but LLMs literally just do what you ask them to. And they're extremely good at weighing both sides if that's what you specifically want.

So whose fault is it if you only ask for one side, or if the LLM is too sycophantic? I'm not sure it's the LLM's fault, actually.
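
To make it concrete, the kind of framing I mean looks something like this (paraphrased and illustrative, not my exact prompt or my actual decision):

    I'm leaning towards taking the new job. Act as my opponent: argue the
    strongest case for staying, list the risks I haven't mentioned, and
    tell me which of my stated reasons are weakest. Don't soften it, and
    don't agree with me unless the evidence forces you to.

Asking for the opposing case directly like that is the whole trick.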

colechristensen 6 hours ago | parent | prev [-]

>"'And it is also said,' answered Frodo: 'Go not to the Elves for counsel, for they will say both no and yes.'

>"'Is it indeed?' laughed Gildor. 'Elves seldom give unguarded advice, for advice is a dangerous gift, even from the wise to the wise, and all courses may run ill...'"

This is the only way you should solicit personal advice from an LLM.