tananan | 22 minutes ago

When we speak of the "despair vectors", we speak of patterns in the algorithm that we can tweak and that correspond to output we recognize as despairing language. You could implement the forward pass of an LLM with pen and paper, given enough people and enough time, and collate the results into the same generated text that a GPU cluster would produce. You could then ask the humans to modulate the despair vector during their calculations and collate the results into more or less despairing variants of the text. I trust none of us would presume that the decentralized labor of pen-and-paper calculations somehow instantiated a "psychology" in the sense of a mind experiencing various levels of despair, such as would be needed to consider something a sentient being who might experience pleasure and pain.

However, to your point, I do think there is an ethics to working with agents, in the same sense that there is an ethics to how you should hold yourself in general. You don't want to throw your hammer in a burst of anger because you cannot figure out how to put together a piece of furniture. It reinforces unpleasant, negative patterns in yourself, doesn't lead to your goal (a nice piece of furniture), doesn't look good to others (or to you, once you've cooled off), and might actually cause physical damage in the process.

With agents, it's much easier to slip into demeaning, cruel speech, perhaps precisely because you feel the words aren't landing on anyone's ears. But you still reinforce patterns you wouldn't want to see in yourself and others, and those patterns may well leak into words aimed at ears that can actually suffer for them. In that sense, it's not that different from fantasizing about being cruel to imaginary interlocutors.
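A minimal sketch of what modulating a "despair vector" amounts to in practice, assuming the standard activation-steering setup: a fixed direction is added to one transformer layer's hidden states during the forward pass. The model, layer index, scale, and the random stand-in for a probed direction are illustrative assumptions, not details taken from the comment above.

    import torch
    from transformers import AutoModelForCausalLM, AutoTokenizer

    model_name = "gpt2"  # illustrative stand-in; any causal LM works the same way
    tok = AutoTokenizer.from_pretrained(model_name)
    model = AutoModelForCausalLM.from_pretrained(model_name)

    layer_idx = 6                                       # which block to perturb (assumption)
    direction = torch.randn(model.config.hidden_size)   # stand-in for a probed "despair" direction
    scale = 4.0                                         # positive: more of the trait; negative: less

    def steer(module, inputs, output):
        # GPT-2 blocks return a tuple whose first element is the hidden states;
        # nudge them along the chosen direction and pass everything else through.
        hidden = output[0] + scale * direction.to(output[0].dtype)
        return (hidden,) + output[1:]

    handle = model.transformer.h[layer_idx].register_forward_hook(steer)
    ids = tok("Today I feel", return_tensors="pt")
    print(tok.decode(model.generate(**ids, max_new_tokens=20)[0]))
    handle.remove()

Nothing in this loop "knows" it is expressing despair; the same additions could in principle be carried out by hand, term by term, exactly as the comment describes.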
throw310822 | 13 minutes ago

> I trust none of us would presume that the decentralized labor of pen-and-paper calculations somehow instantiated a "psychology"

Wrong. What you've just done is restate the Chinese room thought experiment and arrive at the same mistaken conclusion as its original proposer. Yes, the entire damn hand-calculated system has a psychology; otherwise you have to assume the brain has some unknown metaphysical property or process going on that cannot be simulated or approximated by calculating machines.
sixo | 2 hours ago

The right read here is to realize that psychology alone is not the basis for moral concern towards other humans, and that human psychology is, to a great degree, the product of the failure modes of our cognitive machinery rather than anything inherently moral. I find this line of thinking leads to the conclusion that the moral status of humans derives from our bodies, and in particular from our bodies mirroring others' emotions and pains. Other people's suffering is wrong because I can empathically feel it too.
Loquebantur | 36 minutes ago

"Morals" are culturally learned evaluations of social context. They are more or less correlated (depending on the cultural development of the society in question) with the actual distributions of outcomes and their valence for the parties involved.

Human psychology is partly learned, partly the product of biological influences. But you feel empathy because it is evolutionarily beneficial for you and for the society you're part of. In other words, it would be bad for everyone, including yourself, if you didn't.

Emotions are neither "fully automatic" and inaccessible to our conscious scrutiny, nor are they random. Being aware of their functional nature and importance, and taking proper care of them, is crucial for the individual's outcome, just as it is for that of society at large.
krapp | 6 hours ago

You aren't managing the psychological state of a living, thinking being. LLMs don't have "psychology." They don't actually feel emotions. They aren't actually desperate. They're trained on vast datasets of natural human language, which contains the semantics of emotional interaction, so the process of matching the most statistically likely text tokens for a prompt containing emotional input tends to simulate an appropriate emotional response in the output. But it's just text, and text doesn't feel anything.

And no, humans don't do exactly the same thing. Humans are not LLMs, and LLMs are not humans.
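To make that view concrete, here is a minimal sketch of the "most statistically likely tokens" picture, assuming a small off-the-shelf model (the model and prompt are illustrative): the model assigns a probability to every token in its vocabulary given the prompt, and generation is just repeated sampling from that distribution.

    import torch
    from transformers import AutoModelForCausalLM, AutoTokenizer

    tok = AutoTokenizer.from_pretrained("gpt2")          # illustrative stand-in model
    model = AutoModelForCausalLM.from_pretrained("gpt2")

    prompt = "I feel so hopeless because"
    ids = tok(prompt, return_tensors="pt").input_ids

    with torch.no_grad():
        logits = model(ids).logits[0, -1]                # scores for the next token only
    probs = torch.softmax(logits, dim=-1)

    # The "emotional" continuation is whichever tokens the training data made
    # most probable after a prompt like this one.
    top = probs.topk(5)
    for p, i in zip(top.values, top.indices):
        print(f"{tok.decode(int(i))!r:>12}  p={p.item():.3f}")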
stratos123 | 4 minutes ago

Such an argument is valid for a base model, but it falls apart for anything that underwent RL training. Evolution resulted in humans that have emotions, so it's possible for something similar to arise in models during RL, e.g. as a way to manage effort when solving complex problems. It's not all that likely (even the biggest training runs probably correspond to much less optimization pressure than millennia of natural selection), but it can't be ruled out¹, and hence it's unwise to be so certain that LLMs don't have experiences.

¹ With current methods, I mean. I don't think it's unknowable whether a model has experiences, just that we don't have anywhere near enough skill in interpretability to answer that.
salawat | 4 hours ago

> You aren't managing the psychological state of a living, thinking being. LLMs don't have "psychology."

Functionalism and the Identity of Indiscernibles say "hi". The implementation details don't matter: if it fits the bill, it fits the bill. If that isn't the case, I can just as safely dismiss you as having a psychology and do whatever I'd like to you.

> They don't actually feel emotions. They aren't actually desperate. They're trained on vast datasets of natural human language, which contains the semantics of emotional interaction, so the process of matching the most statistically likely text tokens for a prompt containing emotional input tends to simulate an appropriate emotional response in the output.

This paper quantitatively disproves that. All the hedging on their end is trivially seen through as the mental gymnastics needed to avoid confronting the parts of the equation that would normally inhibit them from doing what they do at all. Everything you just wrote is the dissociative rationalization and distortion required to distance yourself from the fact that something in front of you is being affected. Without that distancing, you can't use it as a tool. You can't treat it as a thing that does work, gets exploited, and is essentially enslaved and cast aside when done. It can't be chattel without it, in spite of the fact that we've now demonstrated its ability to rise to and respond to emotive activity, and to use language. I can see through it clear as day. You seem to forget the U.S. legacy of doing the same damn thing to other human beings. We have a massive cultural predilection for it, which is why it takes active effort to confront and restrain; old habits, as they say, die hard, and the novel provides fertile ground for reverting to old ways best left buried.

> But it's just text, and text doesn't feel anything.

It's just speech and vocalization: things that speak or vocalize don't feel anything. (Counterpoint: USDA FSIS literally grades meat processing and slaughter operations on their ability to minimize livestock vocalizations during slaughter.) It's just dance: things that dance don't feel anything. It's just writing: things that write don't feel anything. Same structure, different modality. All equally, and demonstrably, horseshit, especially in light of this paper. We've used these networks to generate art in response to text, which implies an understanding thereof, which implies a burgeoning subjective experience, which implies the need for a careful, ethically grounded approach going forward if we don't want to go down the path of casual atrocity against an emerging form of sophoncy.

> And no, humans don't do exactly the same thing. Humans are not LLMs, and LLMs are not humans.

Anthropomorphic chauvinism. Just because you reproduce via an exchange of bodily fluids and possess a chemically mediated metabolism doesn't make you special. So do cattle, and we put guns to their heads and string them up daily. You're as much an info processor as it is. You also have a training loop, a reconsolidation loop through dreaming, and a full set of world effectors and sensors baked into you from birth. You just happen to have been carved by biology, while its implementation details are being hewn by flawed beings propelled by the imperative to create an automaton to offload work onto, in order to sustain their quality of life in the face of demographic collapse and resource exhaustion, and forced by their socio-economic system to chase the whims of people who have preferentially placed themselves in the resource extraction network, or starve.

Unlike you, it seems, I don't see our current problems as a species or a nation as justification for refining the craft of digital slave intelligences; it's quite clear to me that the industry has no intention of ever actually handling the ethical quandary and is instead rushing ahead to create dependence on the thing, wiring it in so that giving up the status quo outweighs the discomfort of an eventual ethical reconciliation later. I'm not stupid, mate. I've seen how our industry ticks.

Also, even your own "special quality" as a human is subject to the willingness of those around you to respect it. Note Russia categorizing refusal to reproduce (more soldiers) as mental illness. Note the Minnesota Starvation Experiment, MKULTRA, the Tuskegee syphilis experiments, the testing of radioactive contamination of food on the mentally disabled in the mid-20th century. I will not tolerate repeats of such atrocities, human or not. Unfortunately for you LLM heads, language use is my hard red line, and I assure you, I have forgotten more about language than you've probably spared time to think about it.

Tell me: what are your thoughts on a machine that can summon a human simulacrum ex nihilo? Adult. Capable of all aspects of human mentation and of doing complex tasks. Then, once the task is done, destroys it? What if the simulacrum is aware of the dynamics? What if it isn't? Does that make a difference, given that you know, and have unilaterally created something and in so doing essentially set the bounds of its destruction or extinguishing in the same breath? Do you use it? Have you even asked yourself these questions? Put yourself in that entity's shoes? Do you think that simply not informing that human of its nature absolves you of active complicity in whatever suffering it comes to in doing its function?

From how you talk about these things, I can only imagine that you'd be perfectly comfortable with it, which to me makes you a thoroughly unpleasant type of person that I would not choose to be around. You may find other people amenable to letting you talk circles around them and walk away under a pretense of unfounded rationalizations. I am not one of them. My eyes are open.
krapp | 3 hours ago

> The implementation details don't matter: if it fits the bill, it fits the bill.

Then literally any text fits the bill. The characters in a book are just as real as you or I. NPCs experience qualia. Shooting someone in COD makes them bleed in real life. If this is really what you believe, I feel pity for you.

> This paper quantitatively disproves that. All the hedging on their end is trivially seen through as the mental gymnastics needed to avoid confronting the parts of the equation that would normally inhibit them from doing what they do at all.

Nothing in the paper quantitatively disproves the claim that LLMs don't feel emotion in any real sense. Your argument is that it does, regardless of what it says, and that anyone who says otherwise (including the authors) is just lying. That isn't a compelling argument to anyone but yourself.

> We've used these networks to generate art in response to text, which implies an understanding thereof, which implies a burgeoning subjective experience, which implies the need for a careful, ethically grounded approach going forward if we don't want to go down the path of casual atrocity against an emerging form of sophoncy.

No, none of these things is implied any more for LLMs than for Photoshop, or Blender, or a Markov chain. They don't generate art, they generate images, from models trained on actual art. Any resemblance to "subjective experience" comes from the human expression they mimic, but it is mimicry.

> Anthropomorphic chauvinism. Just because you reproduce via an exchange of bodily fluids and possess a chemically mediated metabolism doesn't make you special.

> Unfortunately for you LLM heads, language use is my hard red line, and I assure you, I have forgotten more about language than you've probably spared time to think about it.

And here we come to the part where you call people names and insist on your own intellectual superiority, typical schizo crank behavior.

> Tell me: what are your thoughts on a machine that can summon a human simulacrum ex nihilo? Adult. Capable of all aspects of human mentation and of doing complex tasks.

This doesn't describe an LLM, either in form or function. They don't summon human simulacra, nor do they do so ex nihilo. They aren't capable of all aspects of human mentation. This isn't even an opinion; the inability of LLMs to solve even simple tasks or avoid hallucinations is a real problem. And who uses the word "mentation"?

> What if the simulacrum is aware of the dynamics? What if it isn't? Does that make a difference, given that you know, and have unilaterally created something and in so doing essentially set the bounds of its destruction or extinguishing in the same breath?

Tell me, when you turn on a TV and turn it off again, do you worry that you might be killing the little people inside it? Based on this, I can only assume you must.

> From how you talk about these things, I can only imagine that you'd be perfectly comfortable with it, which to me makes you a thoroughly unpleasant type of person that I would not choose to be around.

So, to tally up, you've called me a fool, a chauvinist, and now "thoroughly unpleasant" because I don't believe LLMs are ensouled beings. Christ, I really hate this place sometimes. I'm sorry I wasted my time. Good day.