lnenad 9 hours ago
Even if you reduce LLMs to complex autocomplete machines, they are still machines trained to emulate a corpus of human knowledge, and they display emergent behaviors based on that. So it's very logical to attribute human characteristics to them, even though they're not human.
bonesss 7 hours ago | parent | next
I addressed that directly in the comment you're replying to. It's understandable that people readily anthropomorphize algorithmic output designed to provoke anthropomorphized responses. It is not desirable, safe, logical, or rational, since (to paraphrase) they are complex text-transformation algorithms that can, at best, emulate training data reinforced by benchmarks, and they display emergent behaviours based on those. They are not human, so attributing human characteristics to them is highly illogical. Understandable, but irrational. That irrationality should raise biological and engineering red flags.

Humanization also ignores the profit motives directly attached to these text generators, their specialized corpora, and the product delivery surrounding them. Pretending your MS RDBMS likes you better than Oracle's because it said so is insane business thinking (in addition to whatever that means psychologically for people who know the truth of the math).
K0balt 7 hours ago | parent | prev
Exactly this. Their characteristics are by design constrained to be as human-like as possible and optimized for human-like behavior. It makes perfect sense to characterize them in human terms and to attribute human-like traits to their human-like behavior. Of course, they are not humans, but the language and concepts developed around human nature are the set of semantics that most closely applies, with some LLM-specific traits added on.