empath75 3 hours ago
Well, part of an LLM's fine-tuning is telling it what it is, and modern LLMs have enough learned concepts that they can produce a reasonably accurate description of what they are and how they work. Whether a model knows or understands anything is sort of orthogonal to whether it can answer in a way consistent with knowing or understanding what it is, and current models do that. I suspect that absent a trained-in fictional context in which to operate ("You are a helpful chatbot"), it would answer the way a random person in 1914 would if you asked them what they are.
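The "fictional context" mentioned above is usually supplied as a system message prepended to the conversation. A minimal sketch of that framing, using the common chat-message format (the exact role names and content are illustrative, not tied to any particular provider):

```python
# A chat transcript as most chat-completion APIs represent it:
# a list of role-tagged messages. The system message is where the
# model is told "what it is" before any user turn arrives.
messages = [
    {
        "role": "system",
        "content": "You are a helpful chatbot.",  # the trained-in persona
    },
    {
        "role": "user",
        "content": "What are you?",
    },
]

# The model's answer to "What are you?" is conditioned on the system
# message; strip it, and the model falls back on whatever identity
# its fine-tuning baked in.
system_context = [m for m in messages if m["role"] == "system"]
print(len(system_context))  # 1 system message framing the exchange
```

Without that system message the conversation still parses, but the persona the answer draws on comes entirely from fine-tuning rather than from an explicit instruction.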