jibal a day ago:
There are no "real-world concepts" or "semantic meaning" in LLMs; there are only syntactic relationships among text tokens.
lblume a day ago:
That really stretches the meaning of "syntactic". Humans have thoroughly evaluated LLMs and discovered many patterns that map very cleanly onto what they would consider real-world concepts. Semantic properties do not require any human-level understanding: a Python script has specific semantics one may use to discuss its properties, and it has become increasingly clear that LLMs can reason about such scripts (as in derive knowable facts, draw logical conclusions, and compare them to alternatives; not as in having a conscious thought process) not just by their syntactic but also by their semantic properties (bounded and limited by Rice's theorem, of course).
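To make the syntax/semantics distinction concrete, here is a minimal Python sketch (the function names are invented for illustration): two functions whose token sequences barely overlap yet compute the same result, which is exactly the kind of equivalence one can ask an LLM to judge.

    # Two syntactically different functions with identical semantics:
    # both return the sum of the first n positive integers.

    def sum_loop(n: int) -> int:
        total = 0
        for i in range(1, n + 1):
            total += i
        return total

    def sum_formula(n: int) -> int:
        return n * (n + 1) // 2

    # A purely syntactic comparison sees little overlap between these
    # token sequences; a semantic judgment recognizes that they agree
    # on every non-negative input (spot-checked here for a small range).
    assert all(sum_loop(k) == sum_formula(k) for k in range(100))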
int_19h a day ago:
If that were true, homonyms would be an intractable challenge for LLMs, yet they handle them just fine, and do so in tasks that require understanding of their semantics (e.g. give an LLM a long text and ask it to catalog every use of the word "right", sorted into buckets according to meaning).
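A rough sketch of the homonym-bucketing task described above; call_llm is a hypothetical placeholder for whatever model endpoint one actually uses, and the example senses of "right" are illustrative.

    # Sketch of the task: catalog uses of a homonym by meaning.
    # `call_llm` is a hypothetical stand-in for a real model call.

    def call_llm(prompt: str) -> str:
        raise NotImplementedError("wire this up to an actual model endpoint")

    def bucket_homonym_uses(text: str, word: str = "right") -> str:
        prompt = (
            f"Read the following text and list every occurrence of the word "
            f"'{word}'. Group the occurrences into buckets by meaning "
            f"(e.g. 'correct', 'direction', 'entitlement'), quoting each "
            f"sentence under its bucket.\n\n{text}"
        )
        return call_llm(prompt)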
empath75 a day ago:
Do you learn anything from reading books, or is everything you know derived entirely from personal experience?