jibal | 18 hours ago
> Humans have thoroughly evaluated LLMs and discovered many patterns that very cleanly map to what they would consider real-world concepts

Well yes, humans have real-world concepts.

> Semantic properties do not require any human-level understanding

Strawman.

> a Python script has specific semantics one may use to discuss its properties

These are human-attributed semantics. To say that a static script "has" semantics is a category mistake; certainly it doesn't "have" them the way LLMs are purported by the OP to have concepts. (The sketch at the end of this comment illustrates what I mean.)

> it has become increasingly clear that LLMs can reason (as in derive knowable facts, extract logical conclusions, compare it to different alternatives

These are highly controversial claims. LLMs present conclusions textually that are implicit in their training data; getting from there to the claim that they can reason is a huge leap. Certainly we know (from studies by Anthropic and elsewhere) that the reasoning steps LLMs claim to go through are not the LLM's actual internal states. I'm not going to say more about this ... it has been discussed at length in the academic literature.
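A minimal sketch of the point (my own made-up example, not the parent's; assumes a Python 3 interpreter): the same source text takes on different meanings under different interpretation conventions, which is the sense in which semantics are attributed rather than possessed.

    # The same source text, two attributed meanings: its "semantics" come
    # from whichever convention a reader (or interpreter) applies to it,
    # not from the characters themselves.
    src = "3 / 2"

    # Read under Python 3 conventions, / is true division:
    print(eval(src))  # prints 1.5

    # Read under Python 2 conventions, the identical text denoted floor
    # division of ints and evaluated to 1. Nothing in the string changed;
    # only the attributed semantics did.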