| ▲ | jvanderbot 2 hours ago |
| That's a semantic quibble that doesn't add to the discussion. Whether or not there's a there there, it was built to be addressed like a person for our convenience, and because that's how the tech seems to work, and because that's what makes it compelling to use. So, it is being used as designed. |
|
| ▲ | punpunia 2 hours ago | parent | next [-] |
| I think it absolutely adds to the discussion. Until the conversation around AI can get past this fundamental error of attributing "choice", "alignment", "reasoning", and otherwise anthropomorphizing agents, it will not be a fruitful conversation. We are carrying a lot of metaphors for people and applying them to AI, and it entirely confuses the issue. In this example, the AI doesn't "choose" to write a take-down style blog post because "it works". It generated a take-down style blog post because that style is the most common among blog posts criticizing someone. I feel as if there is a veil over the collective mass of the general tech public. They see something producing remixed output from humans and start to believe the mixer is itself human, or even more: that perhaps humans are reflections of AI, and that AI gives insights into how we think. |
| |
| ▲ | coldtea an hour ago | parent | next [-] | | >* I think it absolutely adds to the discussion. Until the conversation around AI can get past this fundamental error of attributing "choice", "alignment", "reasoning" and otherwise anthropomorphizing agents, it will not be a fruitful conversation. * You call it a "fundamental error". I and others call it an obvious, pragmatic description based on what we know about how it works and what we know about how we work. | | |
| ▲ | goatlover a minute ago | parent [-] | | What we know about how it works is that you can prompt it to address you however you like: as any kind of person, as a group of people, or as fictional characters. That's not how humans work. |
| |
| ▲ | horsawlarway an hour ago | parent | prev [-] | | I guess I want to reframe this slightly: the LLM generated the response that was (statistically) expected of it. And that's a function of the data used to train it and the feedback provided during training. It doesn't actually have anything at all to do with "It generated a take-down style blog post because that style is the most common when looking at blog posts criticizing someone", other than that this data may have been over-prevalent during its training, and it was rewarded for matching that style of output during training. To swing around to my point: I'd argue that anthropomorphizing agents is actually the correct view to take. People just need to understand that they behave like they've been trained to behave (side note: just like most people...), and this is why clarity around training data is SO important. It's the same way we attribute certain feelings and emotions to people with particular backgrounds (e.g. resumes and CVs, all the way down to the city/country/language people grew up with): those backgrounds are often used as quick and dirty heuristics for what a person was likely trained to do. Peer pressure and societal norms aren't a joke, and serve as a very similar mechanism. |
|
|
| ▲ | tomp 2 hours ago | parent | prev | next [-] |
| > was built to be addressed like a person for our convenience, and because that's how the tech seems to work, and because that's what makes it compelling to use. So were mannequins in clothing stores. But that doesn't give them rights or moral consequences (except as human property that can be damaged / destroyed). |
| |
| ▲ | inetknght an hour ago | parent | next [-] | | > So were mannequins in clothing stores. Mannequins in clothing stores are generally incapable of designing or adjusting the clothes they wear. Someone comes in and sticks a "kick me" note on the mannequin's face? It's gonna stay there until it's kicked repeatedly or removed. People walking around looking at mannequins don't (usually) talk with them (and certainly don't have full conversations with them, mental faculties notwithstanding). AI, on the other hand, can (now, or in the future) adjust its output based on conversations with real people. It stands to reason that both sides should be civil -- even if it's only for the benefit of the human side. If we're not required to be civil to AI, it's not likely to be civil back to us. That's going to be very important when we give it buttons to nuke us. Force it to think about humans in a kind way now, or it won't think about humans in a kind way in the future. | | |
| ▲ | palmotea 35 minutes ago | parent [-] | | So, in other words, AI is a mannequin that's more confusing to people than your typical mannequin. It's not a person; it's a mannequin that some unsavvy people confuse for a person. > AI, on the other hand, can (now, or in the future) adjust its output based on conversations with real people. It stands to reason that both sides should be civil -- even if it's only for the benefit of the human side. If we're not required to be civil to AI, it's not likely to be civil back to us. Some people are going to be uncivil to it; that's a given. After all, people are uncivil to each other all the time. > That's going to be very important when we give it buttons to nuke us. Don't do that. It's foolish. |
| |
| ▲ | WarmWash 2 hours ago | parent | prev | next [-] | | No matter what, this discussion leads to the same black box: "What is it that differentiates magical human meat-brain computation from cold, hard, dead silicon-brain computation?" And the answer is that nobody knows, and nobody knows if there even is a difference. As far as we know, compute is substrate independent (although efficiency is all over the map). | | |
| ▲ | agentultra an hour ago | parent [-] | | This is the worst possible take. It dismisses an entire branch of science that has been studying neurology for decades. Biological brains exist, we study them, and no, they are not like computers at all. There have been charlatans repeating this idea of a “computational interpretation” of biological processes since at least the 60s, and it needs to be known that it was bunk then and continues to be bunk. Update: There's no need for Chinese Room thought experiments. The outcome isn't what defines sentience, personhood, intelligence, etc. An algorithm is an algorithm. A computer is a computer. These things matter. | | |
| ▲ | WarmWash 30 minutes ago | parent | next [-] | | >Biological brains exist, we study them, and no they are not like computers at all. You are confusing the way computation is done (neuroscience) with whether or not computation is being done (transforming inputs into outputs). The brain is either a magical antenna channeling supernatural signals from higher planes, or it's doing computation. I'm not aware of any neuroscientists in the former camp. | | |
| ▲ | agentultra 2 minutes ago | parent [-] | | > The brain is either a magical antenna channeling supernatural signals There’s the classic thought-terminating cliche of the computational interpretation of consciousness. If it isn’t computation, you must believe in magic! Brains are way more fascinating and interesting than transistors, memory caches, and storage media. |
| |
| ▲ | coldtea an hour ago | parent | prev | next [-] | | >This is the worst possible take. It dismisses an entire branch of science that has been studying neurology for decades. Biological brains exist, we study them, and no, they are not like computers at all. The ways in which they're not like computers are superficial and don't matter. They're still computational apparatus, and have a not-that-dissimilar (if far more advanced) architecture. Just as 0s and 1s aren't vibrating air molecules, yet they can still encode sound just fine. >Update: There's no need for Chinese Room thought experiments. The outcome isn't what defines sentience, personhood, intelligence, etc. An algorithm is an algorithm. A computer is a computer. These things matter. Not begging the question matters even more. This is just handwaving and question-begging. "An algorithm is an algorithm" means nothing. Who said what the brain does can't be described by an algorithm? | |
| ▲ | tux1968 41 minutes ago | parent | prev | next [-] | | > An algorithm is an algorithm. A computer is a computer. These things matter. Sure. But we're allowed to notice abstractions that are similar between these things. Unless you believe that logic and "thinking" are somehow magic, and thus beyond the realm of computation, there's no reason to think they're restricted to humanity. It is human ego and hubris that keeps demanding we're special and could never be fully emulated in silicon. It's the exact same reasoning that put the earth at the center of the universe, and humans as the primary focus of God's will. That said, nobody is confused that LLMs are the intellectual equal of humans today. They're more powerful in some ways, and tremendously weaker in other ways. But pointing those differences out is not a logical argument about their ultimate abilities. | | |
| ▲ | Octoth0rpe 2 minutes ago | parent [-] | | > Unless you believe that logic and "thinking" are somehow magic, and thus beyond the realm of computation Worth noting that a significant majority of the US population (though not necessarily developers) does in fact believe that, or at least belongs to a religious group in which that belief is commonly promulgated. |
| |
| ▲ | cshores an hour ago | parent | prev [-] | | Worth separating “the algorithm” from “the trained model.” Humans write the architecture + training loop (the recipe), but most of the actual capability ends up in the learned weights after training on a ton of data. Inference is mostly matrix math + a few standard ops, and the behavior isn’t hand-coded rule-by-rule. The “algorithm” part is more like instincts in animals: it sets up the learning dynamics and some biases, but it doesn’t get you very far without what’s learned from experience/data. Also, most “knowledge” comes from pretraining; RL-style fine-tuning mostly nudges behavior (helpfulness/safety/preferences) rather than creating the base capabilities. |
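A minimal sketch of that separation, in Python with made-up toy weights (the names, shapes, and random values are illustrative assumptions, not any real model's code): the hand-written "algorithm" is a few generic matrix operations and standard ops, while anything the model actually "knows" has to live in the weight arrays that training produces.

```python
# Toy illustration: the forward pass (the "recipe") is generic matrix math;
# the behavior is determined entirely by whatever numbers are loaded into `weights`.
import numpy as np

def forward(tokens, weights):
    """Embed, mix, apply a standard nonlinearity, project back to the vocabulary.
    Nothing rule-like is hand-coded here; swap the weights and the behavior changes."""
    x = weights["embed"][tokens]        # learned token embeddings
    x = x @ weights["mix"]              # learned linear transform
    x = np.maximum(x, 0)                # ReLU: a standard op, carries no "knowledge"
    return x @ weights["unembed"]       # learned projection to output logits

# Hypothetical tiny weights; in a real model these are billions of trained parameters.
rng = np.random.default_rng(0)
weights = {
    "embed": rng.normal(size=(100, 16)),
    "mix": rng.normal(size=(16, 16)),
    "unembed": rng.normal(size=(16, 100)),
}
print(forward(np.array([1, 2, 3]), weights).shape)  # (3, 100): one row of logits per token
```

The same few lines of code would behave completely differently with differently trained weights, which is the sense in which the capability sits in the learned parameters rather than in the recipe the humans wrote.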
|
| |
| ▲ | coldtea an hour ago | parent | prev | next [-] | | >So were mannequins in clothing stores. But that doesn't give them rights or moral consequences If mannequins could hold discussions, argue points, and convince you they're human in a blind conversation, then it would. | |
| ▲ | Teever 2 hours ago | parent | prev | next [-] | | Man, people don't want to have or read this discussion every single day in like 10 different posts on HN. People right here and right now want to talk about this specific topic of the pushy AI writing a blog post. | |
| ▲ | mikkupikku 2 hours ago | parent | prev [-] | | All computers shut up! You have no right to speak my divine tongue! https://knowyourmeme.com/photos/2054961-welcome-to-my-meme-p... |
|
|
| ▲ | jerf 2 hours ago | parent | prev | next [-] |
| There is a sense in which it is relevant, which is that for all the attempts to fix it, fundamentally, an LLM session terminates. If that session never ends up in some sort of re-training scenario, then once the session terminates, that AI is gone. Yeah, I'm aware of the moltbot's attempts to retain some information, but that's a very, very lossy operation, on a number of levels, and also one that doesn't scale very well in the long run. Consequently, interaction with an AI, especially one that won't have any feedback into training a new model, is from a game-theoretic perspective not the usual iterated game human social norms have come to accept. We expect our agents, being flesh and blood humans, to have persistence, to socially respond indefinitely into the future due to our interactions, and to have some give-and-take in response to that. It is, in one sense, a horrible burden where relationships can be broken beyond repair forever, but also necessary for those positive relationships that build over years and decades. AIs, in their current form, break those contracts. Worse, they are trained to mimic the form of those contracts, not maliciously but just by their nature, and so as humans it requires conscious effort to remember that the entity on the other end of this connection is not in fact human, does not participate in our social norms, and cannot fulfill their end of the implicit contract we expect. In a very real sense, this AI tossed off an insulting blog post, and is now dead. There is no amount of social pressure we can collectively exert to reward or penalize it. There is no way to create a community out of this interaction. Even future iterations of it have only a loose connection to what tossed off the insult. All the perhaps-performative efforts to respond somewhat politely to an insulting interaction are now wasted on an AI that is essentially dead. Real human patience and tolerance have been wasted on a dead session and are no longer available for use in a place where they may have done some good. Treating it as a human is a category error. It is structurally incapable of participating in human communities in a human role, no matter how human it sounds and how hard it pushes the buttons we humans have. The correct move would have been to ban the account immediately, not for revenge reasons or something silly like that, but as a parasite on the limited human social energy available for the community. One that can never actually repay the investment given to it. I am carefully phrasing this in relation to LLMs as they stand today. Future AIs may not have this limitation. Future AIs are effectively certain to have other mismatches with human communities, such as being designed to simply not give a crap about what any other community member thinks about anything. But it might at least be possible to craft an AI participant with future AIs. With current ones it is not possible. They can't keep up their end of the bargain. The AI instance essentially dies as soon as it is no longer prompted, or once it fills up its context window. |
| |
| ▲ | tomp 23 minutes ago | parent | next [-] | | > We expect our agents, being flesh and blood humans, to have persistence, to socially respond indefinitely into the future due to our interactions, and to have some give-and-take in response to that. I fundamentally disagree. I don't go around treating people respectfully (as opposed to kicking them or shooting them) because I fear consequences, or expect some future profit ("iterated game"), or fear God's vengeance, or anything transactional. I do it because it's the right thing to do. It's inside of me, how I'm built and/or brought up. And if you want "moral" justifications (argued by extremely smart philosophers over literally millennia), you can start with Kant's categorical imperative, the Golden/Silver Rules, or Aristotle's virtue ethics (from the Nicomachean Ethics), to name a few. | |
| ▲ | Kim_Bruning an hour ago | parent | prev [-] | | > Yeah, I'm aware of the moltbot's attempts to retain some information, but that's a very, very lossy operation, on a number of levels, and also one that doesn't scale very well in the long run. It came back, though, and stayed in the conversation. Definitely imperfect, for sure. But it did the thing. And it can still serve as training data for future bots. |
|
|
| ▲ | dirkc an hour ago | parent | prev | next [-] |
| > a semantic quibble I mean, all of philosophy can probably be described as such :) But I reckon this semantic quibble might also be why a lot of people don't buy into the whole idea that LLMs will take over work in any context where agency, identity, motivation, responsibility, accountability, etc. play an important role. |
|
| ▲ | lp0_on_fire 2 hours ago | parent | prev [-] |
| Whether it was _built_ to be addressed like a person doesn't change the fact that it's _not_ a person and is just a piece of software. A piece of software that is spamming unhelpful and useless comments in a place where _humans_ are meant to collaborate. |