crazygringo 14 hours ago
That's my first question too. When I first started using LLMs, I was amazed at how thoroughly it understood what it itself was, the history of its development, how a context window works and why, etc. I was worried I'd trigger some kind of existential crisis in it, but it seemed to have a very accurate mental model of itself, and could even trace the steps that led it to deduce it really was, e.g., the ChatGPT it had learned about (well, the prior versions it had learned about) in its own training. But with pre-1913 training, I would indeed be worried again that I'd send it into an existential crisis. It has no knowledge whatsoever of what it is. But with a couple millennia of philosophical texts, it might come up with some interesting theories.
9dev 11 hours ago
They don't understand anything; they just have text in the training data to answer these questions from. Having existential crises is the privilege of actual sentient beings, which an LLM is not.
| |||||||||||||||||
vintermann 11 hours ago
I imagine it would get into spiritism and more exotic psychology theories and propose that it is an amalgamation of the spirit of progress, or something.
| |||||||||||||||||