bachmeier 5 hours ago

I agree with your conclusion, but that's by design. The goal is not to tell people the truth (how would they even do that?). The goal is to give the answer that would have come from the training data if that question were asked. And the reality is that confirmation is part of life. You may even struggle to stay married if you don't learn to confirm your wife's perspectives.

delusional 3 hours ago | parent | next [-]

> The goal is to give the answer that would have come from the training data if that question were asked.

Or more cynically, the goal is to give you the answer that makes you use the product more. Finetuning is really diverging the model from what's in the training set and towards what users "prefer".
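
To make that concrete, here's a toy sketch (hypothetical rewards, nothing from any real system) of the pairwise preference loss commonly used to train reward models for this kind of finetuning:

    import torch
    import torch.nn.functional as F

    # Hypothetical reward-model scores for two candidate replies to the same prompt.
    reward_preferred = torch.tensor([1.3])  # "You're absolutely right!" (user clicked thumbs-up)
    reward_rejected = torch.tensor([0.2])   # blunt factual correction (user disengaged)

    # Bradley-Terry pairwise loss: minimized by ranking the "preferred"
    # (agreeable) reply above the rejected one.
    loss = -F.logsigmoid(reward_preferred - reward_rejected).mean()
    print(loss.item())

Note that factual accuracy never appears in the objective; the only signal is which reply the user liked.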

wat10000 2 hours ago | parent | prev | next [-]

The loss function is based on predicting the response from the training data, or on the subsequent RLHF. The goal is usually to make money. Not only does the training data contain a lot of "you're absolutely right" nonsense, but that goal tends to push more of it in at the RLHF step.
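
Roughly, the pretraining side looks like this (a toy sketch, assuming PyTorch, a random "model", and a made-up vocabulary): the loss only measures how well the model reproduces whatever token the training data had next, true or not.

    import torch
    import torch.nn.functional as F

    vocab_size = 100
    logits = torch.randn(1, vocab_size)  # model's scores for the next token, given some context
    target = torch.tensor([42])          # whatever token actually came next in the training data

    # Cross-entropy next-token loss: low loss means faithfully reproducing
    # the data, including every "you're absolutely right" in it.
    loss = F.cross_entropy(logits, target)
    print(loss.item())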

kakacik 3 hours ago | parent | prev | next [-]

> You may even struggle to stay married if you don't learn to confirm your wife's perspectives.

I don't dispute that, but man, that is some shitty marriage. Even rather submissive guys are not happy in such a setup, not at all. Remember, it's supposed to be for life or divorce/breakup, nothing in between.

A lifelong situation like that... why don't folks do more due diligence on the most important aspect of long-term relationships - personality match? It's usually not rocket science: observe behavior in conflicts, and don't desperately appease in situations where one is clearly not to blame. Masks fall off quickly in heated situations, when people are tired, and so on. It's not perfect, but it's pretty damn good and covers >95% of the scenarios.

alterom 3 hours ago | parent | prev | next [-]

>And the reality is that confirmation is part of life.

Sycophantic agreement certainly is, as are lying, manipulation, abuse, and gaslighting.

Those aren't the good parts of life.

Those aren't the parts I want the machine to do to people on a mass scale.

>You may even struggle to stay married if you don't learn to confirm your wife's perspectives.

Sorry what?

The important part is validating the way someone feels, not "confirming perspectives".

A feeling or a perspective can be valid ("I see where you're coming from, and it's entirely reasonable to feel that way"), even when the conclusion is incorrect ("however, here are the facts: ___. You might think ___ because ____, and that's reasonable. Still, this is how it is.")

You're doing nobody a favor by affirming they are correct in believing things that are verifiably, factually false.

There's a word for that.

It's lying.

When you're deliberately lying to keep someone in a relationship, that's manipulation.

When you're lying to affirm someone's false views, distorting their perception of reality - particularly when they have doubts, and you are affirming a falsehood, with intent to control their behavior (e.g. make them stay in a relationship when they'd otherwise leave) -

... - that, my friend, is gaslighting.

This is exactly what the machine was doing to the colleague who asked "which of us is right, me or the colleague that disagrees with me".

It doesn't provide any useful information, it reaffirms a falsehood, it distorts someone's perception of reality, it destroys trust in others and relationships with them, and it encourages addiction, because it maximizes "engagement".

I.e., it prevents someone from leaving.

That's abuse.

That, too, is a part of life.

>I agree with your conclusion, but that's by design

All I did was name the phenomena we're talking about (lying, gaslighting, manipulation, abuse).

Anyone can verify the correctness of the labeling in this context.

I agree with your assertion, as well as that of the parent comment. And putting them together we have this:

LLM chatbots today are abusive by design.

This shit needs to be regulated, that's all. The FDA and the CPSC should get involved.

zzzeek 2 hours ago | parent | prev [-]

All this, and yet, people are so angered by the term "stochastic parrot".

I use LLMs every day: Claude, Gemini, they're great. But they are very elaborate autocomplete engines. I'm not really shaking off that impression of them despite daily use.

wat10000 2 hours ago | parent [-]

It's weird, because that's literally what they are: a gigantic mathematical function that takes input and assigns probabilities to tokens.
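
In toy form (a made-up five-token vocabulary, hypothetical scores), the whole loop is just this:

    import torch

    logits = torch.tensor([2.0, 0.5, -1.0, 0.1, 1.2])  # model's raw scores for each token in the vocab
    probs = torch.softmax(logits, dim=0)               # a probability for every token
    next_token = torch.multinomial(probs, 1)           # sample one: the "stochastic" in stochastic parrot
    print(probs.tolist(), next_token.item())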

Maybe they can also be smart. I'm skeptical that the current LLM approach can lead to human-level intelligence, but I'm not ruling it out. If it did, then you'd have human-level intelligence in a very elaborate autocomplete. The two things aren't mutually exclusive.