mallowdram a day ago

Embedding space built on arbitrary points never resolves to specifics, particularly downstream. Words are arbitrary; we remained lazy at an unusually vague level of signaling because arbitrary signals provide vast advantages to the sender and controller of the signal. Arbitrary signals are essentially primate dominance tools. They are uniquely one-way. CS never considered this. It has no ability to subtract the dark matter of arbitrary primate dominance that's embedded in the code. Where is this in embedding space?

LLMs are designed around Western, attribute-based concepts, not holistic or Eastern ones. There isn't one shred of interdependence: each prediction is decontextualized, and the attempt to reorganize by correction only slightly contextualizes. It's the object/individual illusion in arbitrary words that's meaningless. Anyone who has studied Gentner, Nisbett, or Halliday can look at how LLMs use language and see how vacant they are. This list proves it. LLMs are the equivalent of a circus act using language.

"Let's consider what we mean by "concepts" in an embedding space. Language models don't deal with perfectly orthogonal relationships – real-world concepts exhibit varying degrees of similarity and difference. Consider these examples of words chosen at random: "Archery" shares some semantic space with "precision" and "sport" "Fire" overlaps with both "heat" and "passion" "Gelatinous" relates to physical properties and food textures "Southern-ness" encompasses culture, geography, and dialect "Basketball" connects to both athletics and geometry "Green" spans color perception and environmental consciousness "Altruistic" links moral philosophy with behavioral patterns"

ausbah a day ago | parent | next [-]

aren’t outputs literally conditioned on prior textual context? how is that lacking interdependence?

isn’t learning the probabilistic relationships between tokens an attempt to approximate those exact semantic relationships between words?
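
For concreteness, here is a toy sketch of what "conditioned on prior textual context" means mechanically: the next token is scored given what came before. The bigram counts below are invented; a real LLM conditions on the whole context window with a neural network, but the conditional-probability shape is the same.

```python
# Minimal sketch of next-token prediction conditioned on prior context.
# This toy bigram table (counts are invented) only shows the conditional
# shape P(next | previous) that the questions above refer to.
from collections import Counter, defaultdict

corpus = "the cat sat on the mat the cat ate the fish".split()

bigram_counts = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    bigram_counts[prev][nxt] += 1

def next_token_distribution(prev_word):
    counts = bigram_counts[prev_word]
    total = sum(counts.values())
    return {w: c / total for w, c in counts.items()}

print(next_token_distribution("the"))
# {'cat': 0.5, 'mat': 0.25, 'fish': 0.25} -- the prediction depends
# entirely on the preceding context, and co-occurrence statistics like
# these are one (crude) way semantic relatedness gets approximated.
```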

mallowdram a day ago | parent [-]

Interdependence takes the entire universe into account for each thing or idea. There is no such thing as the probabilistic in a healthy mind. A probabilistic approach is unhealthy.

https://pubmed.ncbi.nlm.nih.gov/38579270/

edit: looking into this, in terms of the brain and arbitrariness it is likely highly paradoxical, even oxymoronic

>>isn’t learning the probabilistic relationships between tokens an attempt to approximate those exact semantic relationships between words?

This is really a poor way of resolving the conduit-metaphor condition of arbitrary signals, of falsifying them as specific, which is always impossible. This is simple linguistics via animal signal science. If you can't duplicate any response from the output with a high degree of certainty, then the signal is only valid in the most limited time-space condition, and yet it is still arbitrary. CS has no understanding of this.
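
As a side note on "can't duplicate any response with a high degree of certainty": under stochastic decoding, identical input can legitimately yield different outputs. A toy sketch of temperature sampling, with an invented next-token distribution standing in for a real model's output:

```python
# Toy illustration of why identical input need not reproduce identical
# output under stochastic decoding. The probabilities are invented; a real
# model would produce them from the prompt, but the sampling step is the same.
import random

next_token_probs = {"bank": 0.45, "river": 0.30, "money": 0.25}

def sample(probs, temperature=1.0):
    # Re-weight by temperature, then draw one token at random.
    weights = {t: p ** (1.0 / temperature) for t, p in probs.items()}
    total = sum(weights.values())
    r = random.random() * total
    cumulative = 0.0
    for token, w in weights.items():
        cumulative += w
        if r < cumulative:
            return token
    return token

print([sample(next_token_probs, temperature=1.0) for _ in range(5)])
# Two runs of this line will usually differ; only temperature -> 0
# (greedy decoding) makes the mapping from input to output deterministic.
```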

troelsSteegin a day ago | parent | prev [-]

> Arbitrary signals are essentially primate dominance tools.

What should I read to better understand this claim?

> LLMs are the equivalent of circles act using language.

Circled apes?

mallowdram a day ago | parent [-]

Basil Bernstein's 1973 studies comparing English and math comprehension differences across class; Halliday, Language and Society (Collected Works, Vol. 10); Maestripieri, Primate Psychology; Tuttle, Apes and Human Evolution; Deacon, The Symbolic Species; MacNeilage, The Origin of Speech.

That's the tip of the iceberg

edit: Since CS doesn't understand the parasitic or viral aspects of language and simply idealizes it, it can't access it. Language is more of a black box than the code of these models. I can't understand how CS assumed this would ever work. It makes no sense to exclude the very thing that language is and then automate it.