heyjamesknight 4 days ago
No. The fundamental encoding unit of an LLM is semantic. Mapping reality into semantic space is a form of lossy compression, and there are entire categories of experience that can't be properly modeled in semantic space. Even in "multimodal" models, text remains the fundamental unit of data storage and of transformation between modes. That's not how your brain works: you don't see a pigeon, label it "pigeon," and then consult your knowledge about "pigeons." You just experience the pigeon. We have 100K years of Homo sapiens thriving without language. "General intelligence" occurs at a level above semantics.