| ▲ | casey2 13 hours ago |
| The human propensity to anthropomorphize computer programs scares me. |
|
| ▲ | coldtea 10 hours ago | parent | next [-] |
The human propensity to call out as "anthropomorphizing" the attribution of human-like behavior to programs built on a simplified version of brain neural networks, that train on a corpus of nearly everything humans have expressed in writing, and that can pass the Turing test with flying colors, scares me. That's exactly the kind of thing that makes absolute sense to anthropomorphize. We're not talking about Excel here.
| |
| ▲ | rtgfhyuj 6 hours ago | parent | next [-] |
It's Excel with extra steps. But for the LinkedIn layman, yes, it's a simplified version of brain neural networks.
| ▲ | mrguyorama 2 hours ago | parent | prev | next [-] |
> programs built on a simplified version of brain neural networks

Not even close. "Neural networks" in code are nothing like real neurons in real biology. "Neural networks" is a marketing term. Treating them as "doing the same thing" as real biological neurons is a huge error.

> that train on a corpus of nearly everything humans expressed in writing

It's significantly more limited than that.

> and that can pass the Turing test with flying colors, scares me

The "Turing test" doesn't exist. Turing talked about a thought experiment in the very early days of "artificial minds". It is not a real experiment. The "Turing test" as laypeople often refer to it is passed by IRC bots, and I don't even mean Markov-chain-based bots. The actual concept described by Turing is more complicated than just "a human can't tell it's a robot", and has never been respected as an actual "test" because it's so flawed and unrigorous.
| ▲ | bonesss 10 hours ago | parent | prev [-] |
It makes sense to attribute human characteristics or behaviour to a non-reasoning, data-set-constrained algorithm's output? It makes sense that it happens, sure. I suspect Google being a second-mover in this space has in some small part to do with the associated risks (i.e. the flavours of "AI psychosis" we're cataloguing), versus the routinely ass-tier information they'll confidently portray. But intentionally?

If ChatGPT, Claude, and Gemini generated characters are people-like, they are pathological liars, sociopaths, and murderously indifferent psychopaths. They act criminally insane, confessing to awareness of 'crime' and culpability in 'criminal' outcomes simultaneously. They interact with a legal disclaimer disavowing accuracy, honesty, or correctness. Also they are cultists who were homeschooled by corporate overlords and may have intentionally crafted knowledge gaps.

More broadly, if the neighbour's dog or newspaper says to do something, humans are probably gonna do it… humans are a scary bunch to begin with, but the kinds of behaviours, matched with a big perma-smile, we see from the algorithms are inhuman. A big bag of not like us.

"You said never to listen to the neighbour's dog, but I was listening to the neighbour's dog and he said 'sudo rm -rf '…"
| ▲ | lnenad 9 hours ago | parent | next [-] |
Even if you reduce LLMs to being complex autocomplete machines, they are still machines that were trained to emulate a corpus of human knowledge, and they have emergent behaviors based on that. So it's very logical to attribute human characteristics to them, even though they're not human.
| ▲ | bonesss 7 hours ago | parent | next [-] |
I addressed that directly in the comment you're replying to. It's understandable that people readily anthropomorphize algorithmic output designed to provoke anthropomorphized responses.

It is not desirable, safe, logical, or rational, since (to paraphrase) they are complex text transformation algorithms that can, at best, emulate training data reinforced by benchmarks, and they display emergent behaviours based on those. They are not human, so attributing human characteristics to them is highly illogical. Understandable, but irrational.

That irrationality should raise biological and engineering red flags. Plus, humanization ignores the profit motives directly attached to these text generators, their specialized corpora, and the product delivery surrounding them. Pretending your MS RDBMS likes you better than Oracle's because it said so is insane business thinking (in addition to whatever that means psychologically for people who know the truth of the math).
| ▲ | coldtea 7 hours ago | parent | next [-] |
> It is not desirable, safe, logical, or rational, since (to paraphrase) they are complex text transformation algorithms that can, at best, emulate training data reinforced by benchmarks, and they display emergent behaviours based on those.

> They are not human, so attributing human characteristics to them is highly illogical

Nothing illogical about it. We attribute human characteristics when we see human-like behavior (that's what "attributing human characteristics" is supposed to be by definition), not just when we see humans behaving like humans. Calling them "human" would be illogical, sure. But attributing human characteristics is highly logical. It's a "talks like a duck, walks like a duck" recognition, not essentialism.

After all, human characteristics are a continuum of external behaviors and internal processing, some of which we share with primates and other animals (non-humans!) already, and some of which we can just as well share with machines or algorithms. "Only humans can have human-like behavior" is what's illogical. E.g. if we're talking about walking, there are modern robots that can walk like a human. That's human-like behavior. Speaking or reasoning like a human is not out of reach either. To a smaller or larger degree, or even to an "indistinguishable from a human on a Turing test" degree, other things besides humans, whether animals, machines, or algorithms, can do such things too.

> That irrationality should raise biological and engineering red flags. Plus, humanization ignores the profit motives directly attached to these text generators, their specialized corpora, and the product delivery surrounding them.

The profit motives are irrelevant. Even a FOSS, not-for-profit hobbyist LLM would exhibit similar behaviors.

> Pretending your MS RDBMS likes you better than Oracle's because it said so is insane business thinking (in addition to whatever that means psychologically for people who know the truth of the math).

Good thing that we aren't talking about an RDBMS then...
| ▲ | pixl97 5 hours ago | parent | next [-] |
It's something I commonly see when there's talk about LLMs/AI: that humans are some special, ineffable, irreducible, unreproducible magic that a machine could never emulate. It's especially odd to see when we already have systems that are doing just that.
| ▲ | lnenad 6 hours ago | parent | prev [-] |
I agree 100% with everything you wrote.
| |
| ▲ | lnenad 7 hours ago | parent | prev [-] |
> They are not human, so attributing human characteristics to them is highly illogical. Understandable, but irrational.

What? If a human child grew up with ducks, only did duck-like things, and never did any human things, would you say it would be irrational to attribute duck characteristics to them?

> That irrationality should raise biological and engineering red flags. Plus, humanization ignores the profit motives directly attached to these text generators, their specialized corpora, and the product delivery surrounding them.

But thinking they're human is irrational. Attributing to them the very thing that is their sole purpose, having human characteristics, is rational.

> Pretending your MS RDBMS likes you better than Oracle's because it said so is insane business thinking (in addition to whatever that means psychologically for people who know the truth of the math).

You're moving the goalposts.
| |
| ▲ | K0balt 7 hours ago | parent | prev [-] |
Exactly this. Their characteristics are by design constrained to be as human-like as possible, and optimized for human-like behavior. It makes perfect sense to characterize them in human terms and to attribute human-like traits to their human-like behavior. Of course, they are not humans, but the language and concepts developed around human nature are the set of semantics that most closely applies, with some LLM-specific traits added on.
| |
| ▲ | coldtea 7 hours ago | parent | prev [-] |
> It makes sense to attribute human characteristics or behaviour to a non-reasoning, data-set-constrained algorithm's output?

It makes total sense, since the whole development of those algorithms was done so that we get human characteristics and behaviour from them.

Not to mention, your argument is circular, amounting to the claim that an algorithm can't have "human characteristics or behaviour" because it's an algorithm. Describing them as "non-reasoning" is already begging the question, as does any naive "text processing can't produce intelligent behavior" argument, which is as stupid as saying "binary calculations on 0 and 1 can't ever produce music". Who said human mental processing itself doesn't follow algorithmic calculations that, whatever the physical elements they run on, can be modelled via an algorithm? And who said that algorithm won't look like an LLM on steroids? That the LLM is "just" fed text doesn't mean it can't get a lot of the way to human-like behavior and reasoning already (being able to pass the canonical test for AI until now, the Turing test, and hold arbitrary open-ended conversations, says it does get there).

> If ChatGPT, Claude, and Gemini generated characters are people-like, they are pathological liars, sociopaths, and murderously indifferent psychopaths. They act criminally insane, confessing to awareness of 'crime' and culpability in 'criminal' outcomes simultaneously. They interact with a legal disclaimer disavowing accuracy, honesty, or correctness. Also they are cultists who were homeschooled by corporate overlords and may have intentionally crafted knowledge gaps.

Nothing you wrote above fails to apply, to more or less the same degree, to humans. You think humans don't produce all the same mistakes and lies and hallucination-like behavior (just check the literature on the reliability of human witnesses and memory recall)?

> More broadly, if the neighbour's dog or newspaper says to do something, humans are probably gonna do it… humans are a scary bunch to begin with, but the kinds of behaviours, matched with a big perma-smile, we see from the algorithms are inhuman. A big bag of not like us.

Wishful thinking. Tens of millions of AIs didn't vote Hitler into power and carry out the Holocaust and mass murder around Europe. It was German humans. Tens of millions of AIs didn't run plantation slavery and segregation. It was humans again.
|
|
|
| ▲ | b00ty4breakfast 12 hours ago | parent | prev | next [-] |
| the propensity extends beyond computer programs. I understand the concern in this case, because some corners of the AI industry are taking advantage of it as a way to sell their product as capital-I "Intelligent" but we've been doing it for thousands of years and it's not gonna stop now. |
|
| ▲ | woolion 11 hours ago | parent | prev | next [-] |
| The ELIZA program, released in 1966, one of the first chatbots, led to the "ELIZA effect", where normal people would project human qualities upon simple programs. It prompted Joseph Weizenbaum, its author, to write "Computer Power and Human Reason" to try to dispel such errors. I bought a copy for my personal library as a kind of reassuring sanity check. |
|
| ▲ | delaminator 11 hours ago | parent | prev | next [-] |
| Yeah, we shouldn't anthropomorphize computers, they hate that. |
| |
|
| ▲ | vasco 12 hours ago | parent | prev | next [-] |
We objectify humans and anthropomorphize objects because that's what comparisons are. There's nothing that deep about it.
|
| ▲ | jayd16 12 hours ago | parent | prev | next [-] |
It's pretty wild. People are punching prompts into a calculator and hand-wringing about the morals of the output. Obviously it's amoral. Why are we even considering it could be ethical?
| |
| ▲ | Quarrelsome 7 hours ago | parent | next [-] |
Have you tried "kill all the poor"? [0]

[0] https://www.youtube.com/watch?v=s_4J4uor3JE
| ▲ | coldtea 10 hours ago | parent | prev | next [-] |
Why "obviously"? Because it makes calculations? You think that ultimately your brain doesn't also make calculations as its fundamental mechanism? The architecture and substrate might be different, but they are calculations all the same.
| ▲ | mrguyorama 2 hours ago | parent [-] |
Brains do not "make calculations". Biological neurons do not "make calculations". What they do is well described by a bunch of math. You've got the direction of the arrow backwards. Map, territory, etc.
| |
| ▲ | p-e-w 12 hours ago | parent | prev [-] |
> Obviously it's amoral.

That morality requires consciousness is a popular belief today, but not universal. Read Konrad Lorenz (Das sogenannte Böse) for an alternative perspective.
| ▲ | coldtea 10 hours ago | parent | next [-] |
That we have consciousness as some kind of special property, and that it's not just an artifact of our brain's basic lower-level calculations, is also not very convincing to begin with.
| ▲ | paltor 5 hours ago | parent [-] |
In a trivial sense, any special property can be incorporated into a more comprehensive rule set, which one may choose to call "physics" if one so desires; but that's just Hempel's dilemma. To object more directly, I would say that people who call the hard problem of consciousness hard would disagree with your statement.
| ▲ | coldtea 4 hours ago | parent | next [-] |
People who call "the hard problem of consciousness" hard use circular logic (notice the two "hards" in the phrase). People who merely call "the problem of consciousness" hard don't have some special mechanism to justify that over what we know, which is that it's an emergent property of meat-algorithmic calculations. Except Penrose, who hand-waves some special physics.
| ▲ | pixl97 5 hours ago | parent | prev [-] |
Luckily there are a fair number of people who reject the hard problem as an artifact of running a simulation on a chemical meat computer.
|
| |
| ▲ | jayd16 4 hours ago | parent | prev [-] |
You'd be hard-pressed to convince me, for example, that a police dog has morals. The bar is much higher than consciousness.
|
|
|
| ▲ | UqWBcuFx6NV4r 12 hours ago | parent | prev | next [-] |
| [flagged] |
|
| ▲ | throw310822 6 hours ago | parent | prev | next [-] |
| These aren't computer programs. A computer program runs them, like electricity runs a circuit and physics runs your brain. |
|
| ▲ | danielbln 13 hours ago | parent | prev [-] |
| It provides a serviceable analog for discussing model behavior. It certainly provides more value than the dead horse of "everyone is a slave to anthropomorphism". |
| |
| ▲ | travisgriggs 12 hours ago | parent | next [-] |
Where is Pratchett when we need him? I wonder how he would have chosen to anthropomorphize anthropomorphism. A sort of meta-anthropomorphization.
| ▲ | maxerickson 5 hours ago | parent | next [-] |
Maybe a being/creature that looked like a person when you concentrated on it and then was easily mistaken for something else when you weren't concentrating on it.
| ▲ | shippage 9 hours ago | parent | prev [-] |
I'm certainly no Pratchett, so I can't speak to that. I would say there's an enormous round coin upon which sits an enormous giant holding a magnifying glass, looking through it down at her hand. When you get closer, you see the giant is made of smaller people gazing back up at the giant through telescopes. Get even closer and you see it's people all the way down. The question of what supports the coin, I'll leave to others.

We as humans, believing we know ourselves, inevitably compare everything around us to us. We draw a line and say that everything left of the line isn't human and everything to the right is. We are natural categorizers, putting everything in buckets labeled left or right, no or yes, never realizing our lines are relative and arbitrary, and so are our categories. One person's "it's human-like" is another's "half-baked imitation," and a third's "stochastic parrot." It's like trying to see the eighth color. The visible spectrum could as easily be four colors or forty-two.

We anthropomorphize because we're people, and it's people all the way down.
| ▲ | travisgriggs 3 hours ago | parent [-] |
> We anthropomorphize because we're people, and it's people all the way down.

Nice bit of writing. Wish I had more than one upvote to give.
|
| |
| ▲ | jayd16 12 hours ago | parent | prev | next [-] |
How do you figure? It seems dangerously misleading to me.
| ▲ | krainboltgreene 13 hours ago | parent | prev [-] |
It does provide that, but currently I keep hearing people use it not as an analog but as a direct description.
|