LLMorphism: When humans come to see themselves as language models(arxiv.org)
22 points by okey 3 hours ago | 6 comments
Alifatisk 3 minutes ago | parent | next [-]

> When artificial systems produce human-like language, people may draw a reverse inference: if LLMs can speak like humans, perhaps humans think like LLMs.

I think I experienced this when I learned about LLMs, chain of thought, thinking tokens, short-term memory context, and long-term memory context. I began applying these concepts to real life, reasoning as if they described how our brains actually function. But maybe this is more akin to the Tetris effect?

artninja1988 14 minutes ago | parent | prev | next [-]

I think it's meaningless anyway. A calculator doesn't multiply numbers like a human does. The important part is to develop systems that can do many human tasks.

Den_VR an hour ago | parent | prev | next [-]

> are [we] beginning to attribute too little mind to humans.

I don’t think this way of thinking started with LLMs. Does systems-based thinking also attribute too little mind to humans?

iugtmkbdfil834 39 minutes ago | parent [-]

Agreed. I think we, as humans, like to think in terms of various metaphors when it comes to how we perceive ourselves in the world (for example, "I am not some sort of automaton/robot," said while objecting to a boss way back when).

stavros 17 minutes ago | parent | prev | next [-]

We certainly don't know that humans work like LLMs, but do we know that they don't?

TMWNN 34 minutes ago | parent | prev [-]

Highly relevant: Reading Doesn't Fill a Database, It Trains Your Internal LLM <https://tidbits.com/2026/02/28/reading-doesnt-fill-a-databas...>