mallowdram | a day ago
Space embedding based on arbitrary points never resolves to specifics, particularly downstream. Words are arbitrary; we have remained lazy at an unusually vague level of signaling because arbitrary signals provide vast advantages to the sender and controller of the signal. Arbitrary signals are essentially primate dominance tools: they are uniquely one-way. CS never considered this. It has no ability to subtract that dark matter of arbitrary primate dominance that's embedded in the code. Where is this in embedding space?

LLMs are designed for Western concepts of attributes, not holistic or Eastern ones. There's not one shred of interdependence: each prediction is decontextualized, and the attempt to reorganize by correction only slightly contextualizes. It's the object/individual illusion in arbitrary words that's meaningless. Anyone studying Gentner, Nisbett, or Halliday can look at how LLMs use language and see how vacant they are. This list proves it. LLMs are the equivalent of a circus act using language.

"Let's consider what we mean by "concepts" in an embedding space. Language models don't deal with perfectly orthogonal relationships – real-world concepts exhibit varying degrees of similarity and difference. Consider these examples of words chosen at random:

"Archery" shares some semantic space with "precision" and "sport"
"Fire" overlaps with both "heat" and "passion"
"Gelatinous" relates to physical properties and food textures
"Southern-ness" encompasses culture, geography, and dialect
"Basketball" connects to both athletics and geometry
"Green" spans color perception and environmental consciousness
"Altruistic" links moral philosophy with behavioral patterns"
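(For readers unfamiliar with the quoted claim: it is describing graded, non-orthogonal similarity between vectors. Below is a minimal numpy sketch of that idea using made-up 3-dimensional vectors; the words, dimensions, and values are illustrative assumptions, real embeddings are learned from text and have hundreds or thousands of dimensions.)

    import numpy as np

    # Toy "embeddings" (hypothetical values chosen only to illustrate
    # graded overlap between concepts).
    vectors = {
        "archery":    np.array([0.9, 0.7, 0.1]),
        "precision":  np.array([0.8, 0.3, 0.2]),
        "sport":      np.array([0.7, 0.9, 0.0]),
        "gelatinous": np.array([0.1, 0.0, 0.9]),
    }

    def cosine(a, b):
        # Cosine similarity: 1.0 means the vectors point the same way,
        # 0.0 means they are orthogonal (unrelated in this geometry).
        return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

    for word in ("precision", "sport", "gelatinous"):
        print(f"archery vs {word}: {cosine(vectors['archery'], vectors[word]):.2f}")

    # "archery" lands closer to "precision" and "sport" than to "gelatinous":
    # concepts overlap by degree rather than being perfectly orthogonal.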
ausbah | a day ago | parent | next
aren’t outputs literally conditioned on prior textual context? how is that lacking interdependence? isn’t learning the probabilistic relationships between tokens an attempt to approximate those exact semantic relationships between words? | ||||||||
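(To make the conditioning point concrete: an autoregressive model assigns each next token a probability that depends on the preceding tokens. A toy sketch with hypothetical contexts and counts, made up purely for illustration:)

    # Toy next-token model: probabilities depend on the preceding words
    # (hypothetical counts for illustration only).
    context_counts = {
        ("the", "river"):   {"bank": 8,  "flows": 12, "deposit": 0},
        ("the", "savings"): {"bank": 15, "flows": 0,  "deposit": 5},
    }

    def next_token_distribution(context):
        counts = context_counts[context]
        total = sum(counts.values())
        return {tok: c / total for tok, c in counts.items()}

    print(next_token_distribution(("the", "river")))    # "flows" dominates
    print(next_token_distribution(("the", "savings")))  # "bank"/"deposit" dominate

    # The same candidate word ("bank") gets a different probability depending
    # on the preceding context, which is the sense in which each prediction is
    # conditioned on, and so interdependent with, the surrounding text.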
| ||||||||
troelsSteegin | a day ago | parent | prev
> Arbitrary signals are essentially primate dominance tools.

What should I read to better understand this claim?

> LLMs are the equivalent of circles act using language.

Circled apes?