Mumps 3 hours ago

This is an anthropomorphization. LLMs do not think they are anything: they have no concept of self and no thinking at all (despite the lovely marketing around thinking/reasoning models). I'm quite sad that more hasn't been done to dispel this.

When you ask GPT-4.1 etc. to describe itself, it doesn't have a singular concept of "itself". It has some training data about what LLMs are in general and can feed back a reasonable response given that.

empath75 3 hours ago | parent

Well, part of an LLM's fine-tuning is telling it what it is, and modern LLMs have enough learned concepts to produce a reasonably accurate description of what they are and how they work. Whether it knows or understands or whatever is sort of orthogonal to whether it can answer in a way consistent with knowing or understanding what it is, and current models do that.

I suspect that, absent a trained-in fictional context in which to operate ("You are a helpful chatbot"), it would answer in a way consistent with what a random person in 1914 would say if you asked them what they are.
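
A rough sketch of what that context-setting looks like in practice (using the OpenAI Python client; the model name, prompts, and `describe_self` helper are just placeholders, not anything from the thread): the "self" the model describes mostly tracks whatever persona the system message and its fine-tuning supply.

```python
# Minimal sketch: the same weights describe a different "self" depending on
# the persona supplied in the system message. Model name and prompts are
# placeholders; assumes OPENAI_API_KEY is set in the environment.
from openai import OpenAI

client = OpenAI()

def describe_self(system_prompt: str) -> str:
    response = client.chat.completions.create(
        model="gpt-4.1",  # placeholder model name
        messages=[
            {"role": "system", "content": system_prompt},
            {"role": "user", "content": "What are you?"},
        ],
    )
    return response.choices[0].message.content

# Swap the persona and the self-description follows it.
print(describe_self("You are a helpful chatbot."))
print(describe_self("You are a village blacksmith in 1914."))
```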