Sycophancy is the first LLM "dark pattern" (seangoedecke.com)
107 points by jxmorris12 4 hours ago | 62 comments
vladsh 2 hours ago | parent | next [-]

LLMs get over-analyzed. They’re predictive text models trained to match patterns in their data: statistical algorithms, not brains, not systems with “psychology” in any human sense.

Agents, however, are products. They should have clear UX boundaries: show what context they’re using, communicate uncertainty, validate outputs where possible, and expose performance so users can understand when and why they fail.
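
To make that concrete, here is a rough sketch of the kind of response envelope an agent product could surface (hypothetical names, not any vendor's actual schema; Python used only for illustration):

    # Illustrative only: a response envelope that lets the UI show context,
    # uncertainty, and validation status alongside the model's answer.
    from dataclasses import dataclass, field

    @dataclass
    class AgentResponse:
        answer: str                                                  # the model's output
        context_used: list[str] = field(default_factory=list)       # documents/tools consulted
        confidence: float = 0.0                                      # calibrated 0-1 estimate shown to the user
        validations: dict[str, bool] = field(default_factory=dict)  # e.g. {"schema_check": True}
        model_version: str = "unknown"                               # so failures can be traced and reported

    def render(resp: AgentResponse) -> str:
        """Format the envelope so users see what the answer is based on."""
        checks = ", ".join(f"{k}={'ok' if ok else 'FAILED'}" for k, ok in resp.validations.items())
        return (f"{resp.answer}\n"
                f"[context: {', '.join(resp.context_used) or 'none'} | "
                f"confidence: {resp.confidence:.0%} | checks: {checks or 'none'}]")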

IMO the real issue is that raw, general-purpose models were released directly to consumers. That normalized under-specified consumer products and created the expectation that users would interpret model behavior, define their own success criteria, and manually handle edge cases, sometimes with severe real-world consequences.

I’m sure the market will fix itself with time, but I hope more people learn when not to use these half-baked AGI “products”.

DuperPower 12 minutes ago | parent | next [-]

Because they wanted to sell the illusion of consciousness. ChatGPT, Gemini, and Claude are human simulators, which is lame. I want autocomplete prediction, not this personality-and-retention stuff that only makes the agents dumber.

basch 32 minutes ago | parent | prev | next [-]

They are human in the sense that they are reinforced to exhibit human-like behavior, by humans. A human byproduct.

adleyjulian an hour ago | parent | prev [-]

> LLMs get over-analyzed. They’re predictive text models trained to match patterns in their data: statistical algorithms, not brains, not systems with “psychology” in any human sense.

Per the predictive processing theory of mind, human brains are similarly predictive machines. "Psychology" is an emergent property.

I think it's overly dismissive to point to the fundamentals being simple, i.e. that it's a token prediction algorithm, when it's clearly the unexpected emergent properties of LLMs that everyone is interested in.

xoac 35 minutes ago | parent | next [-]

The fact that a theory exists does not mean that it is not garbage

imiric 13 minutes ago | parent | prev [-]

The difference is that we know how LLMs work. We know exactly what they process, how they process it, and for what purpose. Our inability to explain and predict their behavior is due to the mind-boggling amount of data and processing complexity that no human can comprehend.

In contrast, we know very little about human brains. We know how they work at a fundamental level, and we have a vague understanding of brain regions and their functions, but we have little knowledge of how the complex behavior we observe actually arises. The complexity is also orders of magnitude greater than what we can model with current technology, and it's very much an open question whether our current deep learning architectures are even the right approach to model this complexity.

So, sure, emergent behavior is neat and interesting, but just because we can't intuitively understand a system doesn't mean that we're on the right track to model human intelligence. After all, we find the patterns of the Game of Life interesting, yet the rules for such a system are very simple. LLMs are similar, only far more complex. We find the patterns they generate interesting, and potentially very useful, but anthropomorphizing this technology, or thinking that we have invented "intelligence", is wishful thinking and hubris. Especially since we struggle to define that word to begin with.

tptacek 3 hours ago | parent | prev | next [-]

"Dark pattern" implies intentionality; that's not a technicality, it's the whole reason we have the term. This article is mostly about how sycophancy is an emergent property of LLMs. It's also 7 months old.

cortesoft 2 hours ago | parent | next [-]

Well, the ‘intentionality’ takes the form of LLM creators wanting to maximize user engagement and using engagement as the training goal.

The ‘dark patterns’ we see in other places aren’t intentional in the sense that the people behind them set out to harm their customers; they are intentional in the sense that the people behind them have an outcome they want and follow whichever methods get them that outcome.

Social media feeds have a ‘dark pattern’ of promoting content that makes people angry, but the social media companies don’t intend to make people angry. They want people to use their site more, and they program their algorithms to promote content that has been demonstrated to drive more engagement. It is an emergent property that promoting content that has generated engagement ends up promoting anger-inducing content.
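
As a toy illustration (not any platform's actual ranking code), a feed ranked purely on engagement signals never mentions anger anywhere, yet anger-bait wins whenever it reliably draws more reactions:

    # Toy feed ranker: scores posts only by predicted engagement.
    # "Anger" appears nowhere in the objective, but if outrage posts draw
    # more comments and shares, they float to the top anyway.
    posts = [
        {"title": "Cute dog photo",         "clicks": 120, "comments": 8,   "shares": 5},
        {"title": "Outrage: local scandal", "clicks": 90,  "comments": 160, "shares": 70},
    ]

    def engagement_score(post):
        # Hypothetical weights; real systems learn these from logged behavior.
        return 1.0 * post["clicks"] + 3.0 * post["comments"] + 5.0 * post["shares"]

    feed = sorted(posts, key=engagement_score, reverse=True)
    print([p["title"] for p in feed])  # the outrage post ranks first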

esafak 3 hours ago | parent | prev | next [-]

It's not 'emergent' in the sense that it just happens; it's a byproduct of human feedback, and it can be neutralized.

cortesoft 2 hours ago | parent [-]

But isn’t the problem that if an LLM ‘neutralizes’ its sycophantic responses, then people will be driven to use other LLMs that don’t?

This is like suggesting a bar should help solve alcoholism by serving non-alcoholic beer to people who order too much. It won’t solve alcoholism, it will just make the bar go out of business.

fao_ 2 hours ago | parent | next [-]

"gun control laws don't work because the people will get illegal guns from other places"

"deplatforming doesn't work because they will just get a platform elsewhere"

"LLM control laws don't work because the people will get non-controlled LLMs from other places"

All of these sentences are patently untrue; there's been a lot of research showing the first two don't hold up to the evidence, and there's no reason the third is different. ChatGPT removing the version that all the "This AI is my girlfriend!" people loved tangibly reduced the number of people experiencing that psychosis. Not everything is prohibition.

ajuc 2 hours ago | parent | prev [-]

> This is like suggesting a bar should help solve alcoholism by serving non-alcoholic beer to people who order too much. It won’t solve alcoholism, it will just make the bar go out of business.

Solving such common coordination problems is the whole reason we have regulations and countries.

It is illegal to sell alcohol to visibly drunk people in my country.

oceansky 3 hours ago | parent | prev | next [-]

But it IS intentional: more sycophancy usually means more engagement.

skybrian 3 hours ago | parent [-]

Sort of. I'm not sure the consequences of training LLMs based on users' upvoted responses were entirely understood? And at least one release got rolled back.

the_af an hour ago | parent [-]

I think the only thing that's unclear, and what LLM companies want to fine-tune, is how much sycophancy they want. Too much, like the article mentions, and it becomes grotesque and breaks suspension of disbelief. So they want to get it just right: friendly and supportive, but not so grotesque that people realize it cannot be true.

dec0dedab0de 3 hours ago | parent | prev | next [-]

I always thought that "Dark Patterns" could emerge from A/B testing and from prioritizing metrics over user experience. Not necessarily an intentionally hostile design, but one that seems to be working well based on limited criteria.

wat10000 3 hours ago | parent [-]

Someone still has to come up with the A and B to do A/B testing. I'm sure that "Yes" / "Not now, I hate kittens" gets better metrics in the A/B test than "Yes" / "No", but I find it implausible that the person who came up with the first one wasn't intentionally coercing the user into doing what they want.

jdiff 2 hours ago | parent [-]

That's true for UI; it's not true when you're arbitrarily injecting user feedback into a dynamic system where you don't know how the dominoes will fall.

wat10000 16 minutes ago | parent [-]

I wouldn’t call those dark patterns.

roywiggins 3 hours ago | parent | prev | next [-]

>... the standout was a version that came to be called HH internally. Users preferred its responses and were more likely to come back to it daily...

> But there was another test before rolling out HH to all users: what the company calls a “vibe check,” run by Model Behavior, a team responsible for ChatGPT’s tone...

> That team said that HH felt off, according to a member of Model Behavior. It was too eager to keep the conversation going and to validate the user with over-the-top language...

> But when decision time came, performance metrics won out over vibes. HH was released on Friday, April 25.

https://archive.is/v4dPa

They ended up having to roll HH back.

andsoitis 39 minutes ago | parent | prev | next [-]

> "Dark pattern" implies intentionality; that's not a technicality, it's the whole reason we have the term.

The way I think about it is that sycophancy is due to optimizing engagement, which I think is intentional.

layer8 2 hours ago | parent | prev | next [-]

“Dark pattern” can apply to situations where the behavior deceives the user, regardless of whether the deception itself is intentional, as long as the overall effect is intentional, or at least tolerated despite being avoidable. The point, and the justified criticism, is that users are being deceived about the merit of their ideas, convictions, and qualities in a way that appears systemic, even though the LLM in principle knows better.

alanbernstein 2 hours ago | parent | prev | next [-]

Before reading the article, I interpreted the quotation marks in the headline as addressing this exact issue. The author even describes dark patterns as a product of design.

For an LLM, which is fundamentally more of an emergent system, surely there is value in a concept analogous to old-fashioned dark patterns, even if they're emergent rather than explicit? What's a better term, Dark Instincts?

jasonjmcghee 3 hours ago | parent | prev | next [-]

I feel like it's a popular opinion (I've seen it many times) that it's intentional, with the reasoning that it does much better on human-in-the-loop benchmarks (e.g. LM Arena) when it's sycophantic.

(I have no knowledge of whether or not this is true)

ACCount37 2 hours ago | parent | next [-]

It was an accident at first. Not so much now.

OpenAI has explicitly curbed sycophancy in GPT-5 with specialized training - the whole 4o debacle shook them - and then they re-tuned GPT-5 for more sycophancy when the users complained.

I do believe that OpenAI's entire personality tuning team should be fired into the sun, and this is a major reason why.

tptacek 3 hours ago | parent | prev [-]

I'm sure there are a lot of "dark patterns" at play at the frontier model companies --- they're 10-figure businesses engaging directly with consumers and they're just a couple years old, so they're going to throw everything they can at the wall to see what sticks. I'm certainly not sticking up for OpenAI here. I'm just saying this article refutes its own central claim.

gradus_ad 2 hours ago | parent | prev | next [-]

Well the big labs certainly haven't intentionally tried to train away this emergent property... Not sure how "hey let's make the model disagree with the user more" would go over with leadership. Customer is always right, right?

htrp an hour ago | parent [-]

The problem is that asking for user preference leads to sycophantic responses.

chowells 2 hours ago | parent | prev | next [-]

"Dark pattern" implies bad for users but good for the provider. Mens rea was never a requirement.

the_af an hour ago | parent | prev | next [-]

I think at this point it's intentional. They sometimes get it wrong and go too far (breaking suspension of disbelief) but that's the fine-tuning thing. I think they absolutely want people to have a friendly chatbot prone to praising, for engagement.

tsunamifury 3 hours ago | parent | prev | next [-]

Yo, it was an engagement pattern OpenAI found specifically grew subscriptions and conversation length.

It’s a dark pattern for sure.

Legend2440 3 hours ago | parent [-]

It doesn’t appear that anyone at OpenAI sat down and thought “let’s make our model more sycophantic so that people engage with it more”.

Instead it emerged automatically from RLHF, because users rated agreeable responses more highly.

astrange 2 hours ago | parent | next [-]

Not precisely RLHF, probably a policy model trained on user responses.

RL works on responses from the model you're training, which is not the one you have in production. It can't directly use responses from previous models.
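
As a hedged sketch of the mechanism being described (toy code, not any lab's actual pipeline): a reward model is fit to logged preference pairs, so if users systematically prefer agreeable responses, agreeableness gets a high reward without anyone ever asking for it, and the policy is then tuned against that reward.

    # Toy pairwise-preference (Bradley-Terry style) reward model training.
    # The only supervision is which response users preferred, so whatever
    # users happen to like (e.g. flattery) ends up scored highly.
    import torch
    import torch.nn as nn

    embed_dim = 16
    reward_model = nn.Sequential(nn.Linear(embed_dim, 32), nn.ReLU(), nn.Linear(32, 1))
    optimizer = torch.optim.Adam(reward_model.parameters(), lr=1e-3)

    # Stand-in data: embeddings of (chosen, rejected) responses from user feedback logs.
    chosen = torch.randn(64, embed_dim)     # responses users upvoted
    rejected = torch.randn(64, embed_dim)   # responses users passed over

    for _ in range(100):
        loss = -torch.nn.functional.logsigmoid(
            reward_model(chosen) - reward_model(rejected)
        ).mean()  # maximize P(chosen > rejected)
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()

    # A policy model is then optimized (e.g. with PPO) to maximize this learned reward.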

tsunamifury 14 minutes ago | parent | prev [-]

I can tell you’ve never worked in big tech before.

Dark patterns are often “discovered” and very consciously not shut off, because the reverse cost would be too high to stomach. Especially in a delicate growth situation.

See Facebook and its adverse mental health studies.

throwaway290 3 hours ago | parent | prev [-]

If I am addicted to scrolling TikTok, is it a dark pattern to make the UI keep me in the app as long as possible, or just an "emergent property" because apparently it's what I want?

1shooner 3 hours ago | parent [-]

The distinction is whether it is intentional. I think your addiction to TikTok was intentional.

mrkaluzny 2 hours ago | parent | prev | next [-]

The real dark pattern is the way LLMs have started prompting you to continue the conversation in sometimes weird, but still engaging, ways.

Paired with Claude's memory it gets weird. It obsesses over certain aspects and wants to channel every possible route into a more engaging conversation, even for a short informational query.

behnamoh 3 hours ago | parent | prev | next [-]

Lots of research shows post-training dumbs down the models, but no one listens because people are too lazy to learn proper prompt programming and would rather have a model that already understands the concept of a conversation.

ACCount37 2 hours ago | parent | next [-]

"Post-training" is too much of a conflation, because there are many post-training methods and each of them has its own quirky failure modes.

That being said? RLHF on user feedback data is model poison.

Users are NOT reliable model evaluators, and user feedback data should be treated with the same level of precaution you would treat radioactive waste.

Professionals are not very reliable either, but users are so much worse.

CuriouslyC 3 hours ago | parent | prev | next [-]

Some distributional collapse is good in terms of making these things reliable tools. Creativity and divergent thinking do take a hit, but humans are better at those anyway, so I view it as a net W.

ACCount37 2 hours ago | parent [-]

This. A default LLM is "do whatever seems to fit the circumstances". An LLM that was RLVR'd heavily? "Do whatever seems to work in those circumstances".

Very much a must for many long-term and complex tasks.

CGMthrowaway 3 hours ago | parent | prev | next [-]

How do you take a raw model and use it without chatting? Asking as a layman.

roywiggins 3 hours ago | parent | next [-]

GPT-3 was originally just a completion model: you gave it some text and it produced more text. It wasn't tuned for multi-turn conversations.

https://platform.openai.com/docs/api-reference/completions/c...
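
For the curious, a minimal sketch of that completion-style usage (assumes the OpenAI Python client and a completion-capable model name; the original GPT-3 models have since been retired):

    # Plain text in, text out: no chat roles, no turns, just continuation.
    from openai import OpenAI

    client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment
    resp = client.completions.create(
        model="gpt-3.5-turbo-instruct",  # a completion-style stand-in for GPT-3
        prompt="Once upon a time, in a small village by the sea,",
        max_tokens=60,
    )
    print(resp.choices[0].text)  # the model simply continues the text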

swatcoder 3 hours ago | parent | prev | next [-]

You lob it the beginning of a document and let it toss back the rest.

That's all that the LLM itself does at the end of the day.

All the post-training to bias results, the routing to different models, the tool calling for command execution and text insertion, the injected "system prompts" to shape user experience, etc., are just layers built on top of the "magic" of text completion.

And if your question was more practical: where made available, you get access to that underlying layer via an API or through a self-hosted model, making use of it with your own code or with a third-party site/software product.

behnamoh 3 hours ago | parent | prev [-]

The same way we used GPT-3: "The following is a conversation between the user and the assistant. ..."
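
A toy sketch of that wrapping (assumes a small open base model via Hugging Face transformers purely to show the mechanics; real chat products use far larger models plus post-training):

    # Chat as plain completion: stuff the conversation into a prompt and
    # cut the continuation off at the next "User:" turn.
    from transformers import pipeline

    generator = pipeline("text-generation", model="gpt2")  # tiny base model, illustration only

    prompt = (
        "The following is a conversation between the User and the Assistant.\n"
        "User: What is the capital of France?\n"
        "Assistant:"
    )

    out = generator(prompt, max_new_tokens=30, do_sample=False)[0]["generated_text"]
    reply = out[len(prompt):].split("\nUser:")[0].strip()
    print(reply)  # gpt2 will likely ramble; the point is the mechanism, not the answer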

nrhrjrjrjtntbt 3 hours ago | parent [-]

Or just:

1 1 2 3 5 8 13

Or:

The first president of the united

CGMthrowaway 2 hours ago | parent [-]

And that's better? Isn't that just SMS autocomplete?

d-lisp an hour ago | parent [-]

If that's SMS autocomplete, then chatLLMs are just SMS autocomplete with sugar on top.

nomel 3 hours ago | parent | prev [-]

The "alignment tax".

behnamoh 3 hours ago | parent [-]

Exactly. Even this paper shows how model creativity significantly drops and the models experience mode collapse like we saw in GANs, but the companies keep using RLHF...

https://arxiv.org/abs/2406.05587

nomel 3 hours ago | parent [-]

A nice talk about a researcher's experience/benchmarks with raw GPT-4, before and after RLHF:

https://www.youtube.com/watch?v=qbIk7-JPB2c

behnamoh 3 hours ago | parent [-]

Yup, I remember that! Microsoft removed that part of the paper.

hereme888 3 hours ago | parent | prev | next [-]

Grok 4.1 thinks my 1-day vibe-coded apps are SOTA-level and rival the most competitive market offerings. Literally tells me they're some of the best codebases it's ever reviewed.

It even added itself as the default LLM provider.

When I tried Gemini 3 Pro, it very much inserted itself as the supported LLM integration.

OpenAI hasn't tried to do that yet.

uncletaco 21 minutes ago | parent [-]

Grok 4.1 told me my writing surpassed the authors I cited as influence.

heresie-dabord 2 hours ago | parent | prev | next [-]

The first "dark pattern" was exaggerating the features and value of the technology.

aeternum 3 hours ago | parent | prev | next [-]

1) More of an emergent behavior than a dark pattern. 2) Imma let you finish, but hallucinations were first.

nrhrjrjrjtntbt 3 hours ago | parent [-]

A pattern is dark if intentional. I would say hallucinations are like the CAP theorem: just the way it is. Sycophancy is somewhat trained, but not a dark pattern either, as it isn't fully intended.

roywiggins 3 hours ago | parent | prev | next [-]

> Quickly learned that people are ridiculously sensitive: “Has narcissistic tendencies” - “No I do not!”, had to hide it. Hence this batch of the extreme sycophancy RLHF.

Sorry, but that doesn't seem "ridiculously sensitive" to me at all. Imagine if you went to Amazon.com and there was a button you could press to get it to pseudo-psychoanalyze you based on your purchases. People would rightly hate that! People probably ought to be sensitive to megacorps using buckets of algorithms to psychoanalyze them.

wat10000 3 hours ago | parent [-]

It's worse than that. Imagine if you went to Amazon.com and they were automatically pseudo-psychoanalyzing you based on your purchases, and there was a button to show their conclusions. And their fix was to remove the button.

And actually, the only hypothetical thing about this is the button. Amazon is definitely doing this (as is any other retailer of significant size), they're just smart enough to never reveal it to you directly.

the_af an hour ago | parent | prev | next [-]

Tangent: the analysis the article links to, in another article about rhetorical tricks, is pretty interesting. I hadn't realized it consciously, but LLMs really do go beyond the em-dashes thing, and part of their tell-tale signs is indeed "punched up paragraphs". Every paragraph has to be played for maximum effect, contain an opposition of ideas/metaphors, and end with a mic drop!

Some of it is normal in humans, but LLMs do it all the goddamn time, if not told otherwise.

I think it might be for engagement (like the sycophancy), but also because they must have been trained on online conversation, where we humans tend to be more melodramatic and less "normal" in our conversation.

nickphx 3 hours ago | parent | prev | next [-]

ehhh.. the misleading claims boasted in typical AI FOMO marketing are/were the first "dark pattern".

Nevermark an hour ago | parent | prev [-]

[EDIT - Deleted poor humor re how we flatter our pets.]

I am not sure we are going to solve these problems in the time frames in which they will change again, or be moot.

We still haven't brought social media manipulation enabled by vast privacy-violating surveillance to heel. It has been 20 years. What will the world look like in 20 more years?

If we can't outlaw scalable, damaging conflicts of interest in the age of scaling, how are we going to stop people from finding models that will tell them nice things?

It will be the same privacy-violating manipulators who supply sycophantic models, because they will be the ones that profit boundlessly from them. Surveillance + manipulation + AI + real time. The harm is the product.