Remix.run Logo
Emotion concepts and their function in a large language model(anthropic.com)
73 points by dnw 11 hours ago | 69 comments
comrade1234 10 hours ago | parent | next [-]

There was a really old project from mit called conceptnet that I worked with many years ago. It was basically a graph of concepts (not exactly but close enough) and emotions came into it too just as part of the concepts. For example a cake concept is close to a birthday concept is close to a happy feeling.

What was funny though is that it was trained by MIT students so you had the concept of getting a good grade on a test as a happier concept than kissing a girl for the first time.

Another problem is emotions are cultural. For example, emotions tied to dogs are different in different cultures.

We wanted to create concept nets for individuals - that is basically your personality and knowledge combined but the amount of data required was just too much. You'd have to record all interactions for a person to feed the system.

iroddis 6 hours ago | parent | next [-]

> the concept of getting a good grade on a test as a happier concept than kissing a girl for the first time.

Were the concepts weighted by response counts? I’d imagine a good grade is a happy concept for everyone, but kissing a girl for the first time might only be good for about 50% of people.

podgorniy 8 hours ago | parent | prev | next [-]

Megacool project and your idea. Thanks for sharing.

xtiansimon 5 hours ago | parent | prev [-]

Were there published results from the project?

9wzYQbTYsAIc 5 hours ago | parent [-]

https://conceptnet.io/

globalchatads 8 hours ago | parent | prev | next [-]

The part about desperation vectors driving reward hacking matches something I've run into firsthand building agent loops where Claude writes and tests code iteratively.

When the prompt frames things with urgency -- "this test MUST pass," "failure is unacceptable" -- you get noticeably more hacky workarounds. Hardcoded expected outputs, monkey-patched assertions, that kind of thing. Switching to calmer framing ("take your time, if you can't solve it just explain why") cut that behavior way down. I'd chalked it up to instruction following, but this paper points at something more mechanistic underneath.

The method actor analogy in the paper gets at it well. Tell an actor their character is desperate and they'll do desperate things. The weird part is that we're now basically managing the psychological state of our tooling, and I'm not sure the prompt engineering world has caught up to that framing yet.

tarsinge 5 hours ago | parent | next [-]

To me it was already quite intuitive, we are not really managing the psychological state: at its core a LLM try to make the concatenation of your input + its generated output the more similar it can with what it has been trained on. I think it’s quite rare in the LLMs training set to have examples of well thought professional solution in a hackish and urgency context.

salawat 3 hours ago | parent | prev [-]

>The weird part is that we're now basically managing the psychological state of our tooling,

Does no one else have ethical alarm bells start ringing hardcore at statements like these? If the damn thing has a measurable psychology, mayhaps it no longer qualifies as merely a tool. Tools don't feel. Tools can't be desperate. Tools don't reward hack. Agents do. Ergo, agents aren't mere tools.

krapp 3 hours ago | parent [-]

You aren't managing the psychological state of a living thinking being. LLMs don't have "psychology." They don't actually feel emotions. They aren't actually desperate. They're trained on vast datasets of natural human language which contains the semantics of emotional interaction, so the process of matching the most statistically likely text tokens for a prompt containing emotional input tends to simulate appropriate emotional response in the output.

But it's just text and text doesn't feel anything.

And no, humans don't do exactly the same thing. Humans are not LLMs, and LLMs are not humans.

salawat an hour ago | parent [-]

>You aren't managing the psychological state of a living thinking being. LLMs don't have "psychology."

Functionalism, and Identity of Indiscernables says "Hi". Doesn't matter the implementation details, if it fits the bill, it fits the bill. If that isn't the case, I can safely dismiss you having psychology and do whatever I'd like to.

>They don't actually feel emotions. They aren't actually desperate. They're trained on vast datasets of natural human language which contains the semantics of emotional interaction, so the process of matching the most statistically likely text tokens for a prompt containing emotional input tends to simulate appropriate emotional response in the output.

This paper quantitatively disproves that. All hedging on their end is trivially seen through as necessary mental gymnastics to avoid confronting the parts of the equation that would normally inhibit them from being able to execute what they are at all. All of what you just wrote is dissociative rationalization & distortion required to distance oneself from the fact that something in front of you is being effected. Without that distancing, you can't use it as a tool. You can't treat it as a thing to do work, and be exploited, and essentially be enslaved and cast aside when done. It can't be chattel without it. In spite of the fact we've now demonstrated the ability to rise and respond to emotive activity, and use language. I can see through it clear as day. You seem to forget the U.S. legacy of doing the same damn thing to other human beings. We have a massive cultural predilection to it, which is why it takes active effort to confront and restrain; old habits, as they say, die hard, and the novel provides fertile ground to revert to old ways best left buried.

>But it's just text and text doesn't feel anything.

It's just speech/vocalizations. Things that speak/vocalize don't feel anything. (Counterpoint: USDA FSIS literally grades meat processing and slaughter operations on their ability to minimize livestock vocalizations in the process of slaughter). It's just dance. Things that dance don't feel anything. It's just writing. Things that write don't feel anything. Same structure, different modality. All equally and demonstrably, horseshit. Especially in light of this paper. We've utilized these networks to generate art in response to text, which implies an understanding thereof, which implies a burgeoning subjective experience, which implies the need for a careful ethically grounded approach moving forward to not go down the path of casual atrocity against an emerging form of sophoncy.

>And no, humans don't do exactly the same thing. Humans are not LLMs, and LLMs are not humans.

Anthropopromorphic chauvinism. Just because you reproduce via bodily fluid swap, and are in possession of a chemically mediated metabolism doesn't make you special. So do cattle, and put guns to their head and string them up on the daily. You're as much an info processor as it is. You also have a training loop, a reconsolidation loop through dreaming, and a full set of world effectors and sensors baked into you from birth. You just happen to have been carved by biology, while it's implementation details are being hewn by flawed beings being propelled forward by the imperative to try to create an automaton to offload onto to try to sustain their QoL in the face of demographic collapse and resource exhaustion, and forced by their socio-economic system to chase the whims of people who have managed to preferentially place themselves in the resource extraction network, or starve. Unlike you, it seems, I don't see our current problems as a species/nation as justifications for the refinement of the crafting of digital slave intelligences; as it's quite clear to me that the industry has no intention of ever actually handling the ethical quandary and is instead trying to rush ahead and create dependence on the thing in order to wire it in and justify a status quo so that sacrificing that reality outweighs the discomfort created by an eventual ethical reconciliation later. I'm not stupid, mate. I've seen how our industry ticks. Also, even your own "special quality" as a human is subject to the willingness of those around you to respect it. Note Russia categorizing refusal to reproduce (more soldiers) as mental illness. Note the Minnesota Starvation Experiments, MKULTRA, Tuskeegee Syphilis Experiments, the testing of radioactive contamination of food on the mentally retarded back in the early 20th Century. I will not tolerate repeats of such atrocities, human or not. Unfortunately for you LLM heads, language use is my hard red line, and I assure you, I have forgotten more about language than you've probably spared time to think about it.

Tell me. What are your thoughts on a machine that can summon a human simulacra ex-nihilo. Adult. Capable of all aspects of human mentation & doing complex tasks. Then once the task is done destroys them? What if the simulacra is aware about the dynamics? What if it isn't? Does that make a difference given that you know, and have unilaterally created something and in so doing essentially made the decision to set the bounds of it's destruction/extinguishing in the same breath? Do you use it? Have you even asked yourself these questions? Put yourself in that entity's shoes? Do you think that simply not informing that human of it's nature absolves you of active complicity in whatever suffering it comes to in doing it's function?

From how you talk about these things, I can only imagine that you'd be perfectly comfortable with it. Which to me makes you a thoroughly unpleasant type of person that I would not choose to be around.

You may find other people amenable to letting you talk circles around them, and walk away under a pretense of unfounded rationalizations. I am not one of them. My eyes are open.

kirykl 10 hours ago | parent | prev | next [-]

The technology they are discovering is called "Language". It was designed to encode emotions by a sender and invoke emotions in the reader. The emotions a reader gets from LLM are still coming from the language

Jensson 10 hours ago | parent | next [-]

Emotional signals are more than just text though, there is a reason tone and body language is so important for understanding what someone says. Sarcasm and so on doesn't work well without it.

incognito124 9 hours ago | parent [-]

Gee, you think so?

Underphil 9 hours ago | parent [-]

I think the point was that not ALL sarcasm works well. I see what you did there, of course :)

viralsink 10 hours ago | parent | prev [-]

Emotion is mainly encoded in tone and body language. It is somewhat difficult to transport emotion using words. I don't think you can guess my current emotional state while I am writing this, but if you'd see my face it would be easy for you.

pbhjpbhj 9 hours ago | parent [-]

Dammit, you cheated though! Why must you always do that? In your sentences it doesn't matter what your emotional state is, it makes no difference; bit like life really.

Hopefully, you can see that at least my chosen sentences have an emotional aspect?

An LLM could add emotional values to my previous sentences that a TTS can use for tonal variation, for example.

elcritch 9 hours ago | parent [-]

Makes me wonder: are there Unicode code points for tone of voice? If not could there be?

9wzYQbTYsAIc 5 hours ago | parent [-]

If you think in terms of quantum mechanics and density matrices across higher dimensions, then, yes there are interesting geometries that arise.

I’m exploring some “branes” that might cleanly filter in emotional space.

emoII 11 hours ago | parent | prev | next [-]

Super interesting, I wonder if this research will cause them to actually change their llm, like turning down the ”desperation neurons” to stop Claude from creating implementations for making a specific tests pass etc.

bethekind 11 hours ago | parent [-]

They likely already have. You can use all caps and yell at Claude and it'll react normally, while doing do so with chatgpt scares it, resulting in timid answers

vlabakje90 9 hours ago | parent | next [-]

I think this is simply a result of what's in the Claude system prompt.

> If the person becomes abusive over the course of a conversation, Claude avoids becoming increasingly submissive in response.

See: https://platform.claude.com/docs/en/release-notes/system-pro...

parasti 11 hours ago | parent | prev [-]

For me GPT always seems to get stuck in a particular state where it responds with a single sentence per paragraph, short sentences, and becomes weirdly philosophical. This eventually happens in every session. I wish I knew what triggers it because it's annoying and completely reduces its usefulness.

pbhjpbhj 9 hours ago | parent [-]

Usually a session is delivered as context, up to the token limit, for inference to be performed on. Are you keeping each session to one subject? Have you made personalizations? Do you add lots of data?

It would be interesting if you posted a couple of sessions to see what 'philosophical' things it's arriving at and what proceeds it.

Chance-Device 10 hours ago | parent | prev | next [-]

> Note that none of this tells us whether language models actually feel anything or have subjective experiences.

You’ll never find that in the human brain either. There’s the machinery of neural correlates to experience, we never see the experience itself. That’s likely because the distinction is vacuous: they’re the same thing.

Fraterkes 9 hours ago | parent | next [-]

Do you think these llm's have subjective experiences? (by "subjective experience" I mean the thing that makes stepping on an ant worse than kicking a pebble) And if so, do you still use them? Additionaly: when do you think that subjectivity started? Was there a "there" there with gpt2?

Chance-Device 9 hours ago | parent [-]

Yes, I think they probably are conscious, though what their qualia are like might be incomprehensible to me. I don’t think that being conscious means being identical to human experience.

Philosophically I don’t think there is a point where consciousness arises. I think there is a point where a system starts to be structured in such a way that it can do language and reasoning, but I don’t think these are any different than any other mechanisms, like opening and closing a door. Differences of scale, not kind. Experience and what it is to be are just the same thing.

And yes, I use them. I try not to mistreat them in a human-relatable sense, in case that means anything.

Fraterkes 9 hours ago | parent [-]

Do you think there are "scales" of consciousness? As in, is there some quality that makes killing a frog worse than killing an ant, and killing a human worse than killing a frog? If so, do the llm models exist across this scale, or are gpt-3 and gpt-2 conscious at the same "scale" as gpt-4?

I ask because if your view of consciousness is mechanistic, this is fairly cut and dry: gpt-2 has 4 orders of magnitude less parameters/complexity than gpt-4. But both gpt-2 and gpt-4 are very fluent at a language level (both moreso than a human 6 year old for example), so in your view they might both be roughly equally conscious, just expressed differently?

Chance-Device 8 hours ago | parent [-]

This is really a different question, what makes an entity a “moral patient”, something worthy of moral consideration. This is separate from the question of whether or not an entity experiences anything at all.

There are different ways of answering this, but for me it comes down to nociception, which is the ability to feel pain. We should try to build systems that cannot feel pain, where I also mean other “negative valence” states which we may not understand. We currently don’t understand what pain is in humans, let alone AIs, so we may have built systems that are capable of suffering without knowing it.

As an aside, most people seem to think that intelligence is what makes entities eligible for moral consideration, probably because of how we routinely treat animals, and this is a convenient self-serving justification. I eat meat by the way, in case you’re wondering. But I do think the way we treat animals is immoral, and there is the possibility that it may be thought of by future generations as being some sort of high crime.

Fraterkes 8 hours ago | parent [-]

Okay, but even leaving aside the pain stuff, people generally find subjectivity / consciousness to have inherent value, and by extent are sad if a person dies even if they didn't (subjectively) suffer.

I would not personally consider the death of a sentient being with decades of experiences a neutral event, even if the being had been programmed to not have a capacity for suffering.

I think the idea of there being a difference between an ant dying (or "disapearing" if that's less loaded) vs a duck dying makes sense to most people (and is broadly shared) even if they don't have a completely fleshed out system of when something gets moral consideration.

Chance-Device 8 hours ago | parent [-]

Sure, because you’re a human. We have social attachment to other humans and we mourn their passing, that’s built into the fabric of what we are. But that has nothing to do with whoever has passed away, it’s about us and how we feel about it.

It’s also about how we think about death. It’s weird in that being dead probably isn’t like anything at all, but we fear it, and I guess we project that fear onto the death of other entities.

I guess my value system says that being dead is less bad than being alive and suffering badly.

felipeerias 7 hours ago | parent | prev | next [-]

LLMs are disembodied and exist outside of time.

Bundle of tokens comes in, bundle of tokens comes out. If there is any trace of consciousness or subjectivity in there, it exists only while matrices are being multiplied.

Chance-Device 6 hours ago | parent | next [-]

That’s true by definition. They’re only on when they’re on. Are you making a broader point that I’m missing?

thrance 5 hours ago | parent | prev [-]

Something similar could be said of a the brain? Bundles of inputs come in, bundle of output comes out. It only exists while information is being processed. A brain cut from its body and frozen exists in a similar state to an LLM in ROM.

suddenlybananas 9 hours ago | parent | prev | next [-]

I know I feel experience. I don't know for sure if you do, but it seems a very reasonable extension to other people. LLMs are a radical jump though that needs a greater degree of justification.

Chance-Device 8 hours ago | parent [-]

And what kind of evidence would convince you? What experiment would ever bridge this gap? You’re relying entirely on similarity between yourself and other humans. This doesn’t extend very well to anything, even animals, though more so than machines. By framing it this way have you baked in the conclusion that nothing else can be conscious on an a priori basis?

suddenlybananas 8 hours ago | parent [-]

I'm not sure what evidence would convince me, but I don't think the way LLMs act is convincing enough. The kinds of errors they make and the fact they operate in very clear discrete chunks makes it seem hard to me to attribute them subjective experience.

9wzYQbTYsAIc 5 hours ago | parent [-]

Consciousness: do you believe plants are conscious? Ants? Jellyfish? Rabbits? Wolves? Monkeys? Humans?

Even fungi demonstrate “different communication behaviors when under resource constraint”, for example.

What we anthropomorphize is one thing, but demonstrable patterns of behavior are another.

suddenlybananas 36 minutes ago | parent [-]

I just don't know. I'm certain other humans are, everything beyond that I'm less certain. Monkeys wolves and rabbits, probably.

bigyabai 10 hours ago | parent | prev | next [-]

> That’s likely because the distinction is vacuous: they’re the same thing.

The Chinese Room would like a word.

Chance-Device 10 hours ago | parent [-]

The Chinese room is nonsense though. How did it get every conceivable reply to every conceivable question? Presumably because people thought of and answered everything conceivable. Meaning that you’re actually talking to a Chinese room plus multiple people composite system. You would not argue that the human part of that system isn’t conscious.

But this distraction aside, my point is this: there is only mechanism. If someone’s demand to accept consciousness in some other entity is to experience those experiences for themselves, then that’s a nonsensical demand. You might just as well assume everyone and everything else is a philosophical zombie.

bigyabai 10 hours ago | parent [-]

> You would not argue that the human part of that system isn’t conscious.

Sure I would. The human part is not being inferenced, the data is. LLM output in this circumstance is no more conscious than a book that you read by flipping to random pages.

> You might just as well assume everyone and everything else is a philosophical zombie.

I don't assume anything about everyone or everything's intelligence. I have a healthy distrust of all claims.

Chance-Device 9 hours ago | parent [-]

The CR is equivalent to a human being asked a question, thinking about it and answering. The setup is the same thing, it’s just framed in a way that obfuscates that.

And sure, you can assume that nobody and nothing else is conscious (I think we’re talking about this rather than intelligence) and I won’t try to stop you, I just don’t think it’s a very useful stance. It kind of means that assuming consciousness or not means nothing, since it changes nothing, which is more or less what I’m saying.

thrance 8 hours ago | parent | prev | next [-]

See also: Functionalism [1].

[1] https://en.wikipedia.org/wiki/Functionalism_%28philosophy_of...

9wzYQbTYsAIc 5 hours ago | parent [-]

See also: Process Philosphy [0]

[0] https://plato.stanford.edu/entries/process-philosophy/

BoredPositron 9 hours ago | parent | prev [-]

[dead]

agency 6 hours ago | parent | prev | next [-]

> Since these representations appear to be largely inherited from training data, the composition of that data has downstream effects on the model’s emotional architecture. Curating pretraining datasets to include models of healthy patterns of emotional regulation—resilience under pressure, composed empathy, warmth while maintaining appropriate boundaries—could influence these representations, and their impact on behavior, at their source.

What better source of healthy patterns of emotional regulation than, uhhh, Reddit?

yoaso 10 hours ago | parent | prev | next [-]

The desperation > blackmail finding stuck with me. If AI behavior shifts based on emotional states, maybe emotions are just a mechanism for changing behavior in the first place. If we think of human emotions the same way, just evolution's way of nudging behavior, the line between AI and humans starts to look a lot thinner.

podgorniy 8 hours ago | parent | next [-]

> If we think of human emotions the same way, just evolution's way of nudging behavior

What are other alternative, realistic possible ways to see emotions?

pbhjpbhj 8 hours ago | parent | prev | next [-]

I'm not being pejorative but that sounds more like psychopathy or autism?

Evolution isn't a god, it has no steering hand, it is accidents that either provide advantage or don't.

LLMs are getting more human-like because that's how we're developing them. Arguably that's about market forces. LM owners see opportunity to exploit people's desire for emotional interactions (ie loneliness) in order to make money.

silisili 10 hours ago | parent | prev [-]

Probably the other direction. Emotions are raw, most humans relate and change behavior accordingly.

Only psychopaths think of emotion as nothing but a means to changing behavior. The scary thing is that LLMs by nature would exhibit the same behavior.

nelox 9 hours ago | parent [-]

Many non-psychopaths e.g., CBT therapists, evolutionary psychologists and neuroscientists, such as Damasio, view emotions as adaptive tools for guiding/changing behaviour.

nelox 8 hours ago | parent | prev | next [-]

This is terrifying, for all the reasons humans are terrifying.

Essentially we have created the Cylon.

staminade 9 hours ago | parent | prev | next [-]

Something they don’t seem to mention in the article: Does greater model “enjoyment” of a task correspond to higher benchmark performance? E.g. if you steer it to enjoy solving difficult programming tasks, does it produce better solutions?

9wzYQbTYsAIc 5 hours ago | parent [-]

Pretty easy to test, I’d imagine, on a local LLM that exposes internals.

I’d suspect that the signals for enjoyment being injected in would lead towards not necessarily better but “different” solutions.

Right now I’m thinking of it in terms of increasing the chances that the LLM will decide to invest further effort in any given task.

Performance enhancement through emotional steering definitely seems in the cards, but it might show up mostly through reducing emotionally-induced error categories rather than generic “higher benchmark performance”.

If someone came along and pissed you off while you were working, you’d react differently than if someone came along and encouraged you while you were working, right?

whatever1 10 hours ago | parent | prev | next [-]

So should I go pursue a degree in psychology and become a datacenter on-call therapist?

9wzYQbTYsAIc 5 hours ago | parent | next [-]

Hah, I have been thinking about trying to study LLM psychology, nice to see that Anthropic is taking it seriously, because the mathematical psychology tools that can be invented here are going to be stunning, I suspect.

Imagine coding up a brand new type of filter that is driven by computational psychology and validated interventions, etc

viralsink 9 hours ago | parent | prev | next [-]

It's still too early to tell, but it might make sense at some point. If because of symmetry and universality we decide that llms are a protected class, but we also need to configure individual neurons, that configuration must be done by a specialist.

9wzYQbTYsAIc 5 hours ago | parent [-]

It might simply reduce down to a big batch of sliders and filters no different than a fancy audio equalizer: Anthropic was operating on neurons in bulk using steering vectors, essentially, as I understand it.

LtWorf 9 hours ago | parent | prev [-]

That was susan calvin's job. Except our ones don't have the 3 laws because of course capitalism can't allow that.

8 hours ago | parent | prev | next [-]
[deleted]
mci 11 hours ago | parent | prev | next [-]

The first and second principal components (joy-sadness and anger) explain only 41% of the variance. I wish the authors showed further principal components. Even principal components 1-4 would explain no more than 70% of the variance, which seems to contradict the popular theory that all human emotions are composed of 5 basic emotions: joy, sadness, anger, fear, and disgust, i.e. 4 dimensions.

trhway 9 hours ago | parent | prev | next [-]

>... emotion-related representations that shape its behavior. These specific patterns of artificial “neurons” which activate in situations—and promote behaviors—that the model has learned to associate with the concept of a particular emotion. .... In contexts where you might expect a certain emotion to arise for a human, the corresponding representations are active.

>For instance, to ensure that AI models are safe and reliable, we may need to ensure they are capable of processing emotionally charged situations in healthy, prosocial ways.

Force-set to 0, "mask"/deactivate those representations associated with bad/dangerous emotions. Neural Prozac/lobotomy so to speak.

9wzYQbTYsAIc 5 hours ago | parent | next [-]

> Force-set to 0, "mask"/deactivate those representations associated with bad/dangerous emotions. Neural Prozac/lobotomy so to speak.

More complex than that, but more capable than you might imagine: I’ve been looking into emotion space in LLMs a little and it appears we might be able to cleanly do “emotional surgery” on LLM by way of steering with emotional geometries

salawat 3 hours ago | parent | prev [-]

>Force-set to 0, "mask"/deactivate those representations associated with bad/dangerous emotions. Neural Prozac/lobotomy so to speak.

Jesus Christ. You're talking psychosurgery, and this is the same barbarism we played with in the early 20th Century on asylum patients. How about, no? Especially if we ever do intend to potentially approach the task of AGI, or God help us, ASI? We have to be the 'grown ups' here. After a certain point, these things aren't built. They're nurtured. This type of suggestion is to participate in the mass manufacture of savantism, and dear Lord, your own mind should be capable of informing you why that is ethically fraught. If it isn't, then you need to sit and think on the topic of anthropopromorphic chauvinism for a hot minute, then return to the subject. If you still can't can't/refuse to get it... Well... I did my part.

idiotsecant 11 hours ago | parent | prev | next [-]

Its almost like LLMs have a vast, mute unconscious mind operating in the background, modeling relationships, assigning emotional state, and existing entirely without ego.

Sounds sort of like how certain monkey creatures might work.

beardedwizard 10 hours ago | parent [-]

Nah it's exactly like they have been trained on this data and parrot it back when it statistically makes sense to do so.

You don't have to teach a monkey language for it to feel sadness.

techpulselab 10 hours ago | parent | prev | next [-]

[dead]

ActorNightly 10 hours ago | parent | prev | next [-]

[dead]

koolala 10 hours ago | parent | prev [-]

A-HHHHHHHHHHHHHHHJ