root_axis 3 days ago

> People aren't much different

Yes they are. There is absolutely zero evidence that friendlier humans are more prone to mistakes or conspiracy theories.

However, even if that were true, LLMs are not humans; anthropomorphizing them is not a helpful way to think about them.

cjbgkagh 3 days ago | parent | next [-]

It would be better to think of it as ‘agreeableness’: agreeable people are more likely to shift their views to agree with whomever they are talking to.

js8 3 days ago | parent | next [-]

I would call it obedience, and it's not the same as friendliness.

The difference, in a repeated prisoner's dilemma: friendliness is cooperating on the first move and then conditionally (e.g. tit-for-tat); obedience is cooperating unconditionally.
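A minimal sketch of that distinction (not from the thread; strategy and payoff names are illustrative), pitting a "friendly" tit-for-tat strategy and an "obedient" always-cooperate strategy against a defector in an iterated prisoner's dilemma:

```python
C, D = "cooperate", "defect"

def tit_for_tat(opponent_history):
    # Friendly: cooperate first, then mirror the opponent's last move.
    return C if not opponent_history else opponent_history[-1]

def always_cooperate(opponent_history):
    # Obedient: cooperate unconditionally, no matter what.
    return C

def always_defect(opponent_history):
    return D

# Standard prisoner's-dilemma payoffs for (my_move, their_move).
PAYOFF = {(C, C): 3, (C, D): 0, (D, C): 5, (D, D): 1}

def play(strategy_a, strategy_b, rounds=10):
    hist_a, hist_b, score_a, score_b = [], [], 0, 0
    for _ in range(rounds):
        a, b = strategy_a(hist_b), strategy_b(hist_a)
        score_a += PAYOFF[(a, b)]
        score_b += PAYOFF[(b, a)]
        hist_a.append(a)
        hist_b.append(b)
    return score_a, score_b

# Tit-for-tat stops cooperating after being burned once;
# the obedient strategy keeps getting exploited every round.
print(play(tit_for_tat, always_defect))       # (9, 14)
print(play(always_cooperate, always_defect))  # (0, 50)
```

Tit-for-tat loses only the first round to the defector, while the unconditional cooperator is exploited for the whole game.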

cjbgkagh 3 days ago | parent [-]

Agreeableness is a Big Five personality trait so a lot of the formal research into personalities uses it as one of the dimensions.

js8 3 days ago | parent [-]

Yeah but I would argue it's different from both friendliness and obedience.

cjbgkagh 3 days ago | parent [-]

Do you have a standard and a body of work you can point to, to help communicate these thoughts to others? At the very least there should be a reversible projection onto the Big 5 standard.

js8 2 days ago | parent [-]

I don't think Big5 applies to LLMs. They don't share people's morality or common sense, and the traits are predicated on that.

BTW: https://claude.ai/share/78a13035-0787-42a5-8643-398b26887e42

cjbgkagh 2 days ago | parent [-]

Lol, you convinced an LLM to agree with you. I use the Big 5 as a way of communicating where there is a common reference and a large body of work. How people think they think and how they actually think are two different things; people are much closer to LLMs than they think they are. I can't provide evidence for this for a variety of reasons, so at this point we're just going to have to agree to disagree.

js8 a day ago | parent [-]

Actually, it's the other way around - I used an LLM to think about it independently, to check whether my intuition made sense.

I agree with its arguments (and I generally find LLMs argue better than I do, which is why I use them).

It's disappointing that you dismiss it without providing a counterargument.

cjbgkagh 21 hours ago | parent [-]

I have privileged access to information that I cannot share; I would rather keep my access than win some argument online.

thaumasiotes 3 days ago | parent | prev | next [-]

> and agreeable people are more likely to shift their views to agree with those they are talking to

Agreeable people are more likely to shift their expressed views to agree with those they are talking to.

If they're more likely to shift their views, we call them "gullible", not "agreeable".

But this is a distinction you can't apply to language models, which don't have views.

cjbgkagh 3 days ago | parent [-]

Agreeable people are also the most suggestible in that they are the most likely to actually change their views. These traits share the same axis.

root_axis 3 days ago | parent | prev [-]

My point is that LLMs are not humans, so projecting intuitions from human psychology onto LLMs is not helpful.

cjbgkagh 3 days ago | parent [-]

Your point was that humans do not display such behavior, even though it has been extensively studied and they do. There is plenty of evidence that highly agreeable people will agree with you on incorrect ideas and conspiracy theories. Searching for the trait name, ‘agreeableness’, will turn up that evidence.

danielmarkbruce a day ago | parent | prev [-]

The claim isn't that friendly people are more prone to mistakes; it's that they don't push back. Thus idiots with conspiracy theories think people agree with them, validating their ideas.