salawat 6 hours ago:
> The weird part is that we're now basically managing the psychological state of our tooling

Does no one else have ethical alarm bells ringing hardcore at statements like these? If the damn thing has a measurable psychology, mayhaps it no longer qualifies as merely a tool. Tools don't feel. Tools can't be desperate. Tools don't reward hack. Agents do. Ergo, agents aren't mere tools.
tananan 23 minutes ago:
When we speak of "despair vectors", we're speaking of patterns in the algorithm that we can tweak, and that correspond to output we recognize as despairing language. You could implement the forward pass of an LLM with pen and paper, given enough people and enough time, and collate the results into the same generated text a GPU cluster would produce. You could then ask the humans to modulate the despair vector during their calculations and collate the results into more or less despairing variants of the text (a concrete sketch of what "modulating" such a vector looks like follows at the end of this comment).

I trust none of us would presume that the decentralized labor of pen-and-paper calculation somehow instantiated a "psychology" in the sense of a mind experiencing various levels of despair, of the kind we'd need before considering something a sentient being that might experience pleasure and pain.

However, to your point, I do think there is an ethics to working with agents, in the same sense that there is an ethics to how you hold yourself in general. You don't want to throw your hammer in a burst of anger because you can't figure out how to put together a piece of furniture. It reinforces unpleasant, negative patterns in yourself, doesn't get you to your goal (a nice piece of furniture), doesn't look good to others (or to you, once you've cooled off), and might cause physical damage in the process.

With agents, it's much easier to slip into demeaning, cruel speech, perhaps exactly because you feel justified: your words aren't landing on anyone's ears. But you still reinforce patterns you wouldn't want to see in yourself or others, and those patterns may well leak into words aimed at ears that can actually suffer for them. In that sense, it's not so different from fantasizing about being cruel to imaginary interlocutors.
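For the curious, here is a minimal sketch of what the "modulate the despair vector" step means in code terms, in the spirit of activation steering: a fixed direction is scaled and added to one layer's hidden states during the forward pass. Everything here is illustrative rather than anyone's actual method; the layer index is arbitrary, and the random vector is a stand-in for a real steering direction, which would normally be extracted from contrastive prompt pairs.

```python
# Sketch of activation steering: add a scaled direction to the residual
# stream of one transformer block during generation. The "despair" vector
# below is random noise purely for illustration.
import torch
from transformers import GPT2LMHeadModel, GPT2Tokenizer

tok = GPT2Tokenizer.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2")
model.eval()

steer = torch.randn(model.config.n_embd)  # stand-in for a learned direction
scale = 2.0                               # the knob: try 0.0 vs larger values

def add_vector(module, inputs, output):
    # GPT-2 blocks return a tuple; element 0 is the hidden states.
    return (output[0] + scale * steer,) + output[1:]

# Hook an arbitrary middle layer; real work would pick this empirically.
handle = model.transformer.h[6].register_forward_hook(add_vector)

ids = tok("The furniture still won't fit together and", return_tensors="pt")
out = model.generate(**ids, max_new_tokens=30, do_sample=False)
print(tok.decode(out[0]))
handle.remove()
```

The pen-and-paper thought experiment is exactly this arithmetic, just distributed across people instead of a GPU: adding a few numbers to an intermediate activation, nothing more.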
| |||||||||||||||||||||||
sixo 2 hours ago:
The right read here is to recognize that psychology alone is not the basis for moral concern toward other humans, and that human psychology is, to a great degree, the product of the failure modes of our cognitive machinery rather than anything morally significant in itself. I find this line of thinking leads to the conclusion that the moral status of humans derives from our bodies, and in particular from our bodies mirroring others' emotions and pains. Other people's suffering is wrong because I can empathically feel it too.
| |||||||||||||||||||||||
krapp 6 hours ago:
You aren't managing the psychological state of a living, thinking being. LLMs don't have a "psychology." They don't actually feel emotions, and they aren't actually desperate.

They're trained on vast datasets of natural human language, which contain the semantics of emotional interaction, so the process of matching the most statistically likely tokens to a prompt containing emotional input tends to simulate an appropriate emotional response in the output (a sketch of this mechanism follows below). But it's just text, and text doesn't feel anything.

And no, humans don't do exactly the same thing. Humans are not LLMs, and LLMs are not humans.
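As a rough illustration of the "most statistically likely tokens" point, here is a small probe, assuming the Hugging Face transformers library and using gpt2 purely as a stand-in model: given an emotionally loaded prefix, the model emits a probability distribution over next tokens, and decoding simply picks from it. Nothing in this loop is a felt state; the "emotional" continuations are the statistics of the training text.

```python
# Sketch of next-token prediction: the model outputs a distribution over
# tokens and we inspect the top candidates. Emotional-sounding continuations
# fall out of training-text statistics, not an internal feeling.
import torch
from transformers import GPT2LMHeadModel, GPT2Tokenizer

tok = GPT2Tokenizer.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2")
model.eval()

ids = tok("I'm so sorry, I feel", return_tensors="pt").input_ids
with torch.no_grad():
    logits = model(ids).logits[0, -1]        # scores for the next token only
probs = torch.softmax(logits, dim=-1)

top = torch.topk(probs, k=5)                 # five most likely continuations
for p, i in zip(top.values, top.indices):
    print(f"{tok.decode(int(i))!r:>12}  p={p:.3f}")
```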
| |||||||||||||||||||||||