hodgehog11 | 6 days ago
You are judging this based on what the LLM outputs, not on its internals. When we peer into its internals, it seems that LLMs actually have a pretty good representation of what they do and don't know; this just isn't reflected in the output because the relevant information is lost in future context.
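To make "peer into its internals" concrete, here is a minimal sketch of the usual technique: train a linear probe on hidden states to predict whether the model actually knows the answer. The model name, example questions, and labels below are placeholders for illustration, not the setup of any specific paper.

```python
# Sketch: probe hidden states for a "does the model know this?" signal.
# Assumes transformers, torch, and scikit-learn are installed.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer
from sklearn.linear_model import LogisticRegression

model_name = "gpt2"  # placeholder; probing studies typically use larger models
tok = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name, output_hidden_states=True)
model.eval()

def hidden_state(prompt: str, layer: int = -1) -> torch.Tensor:
    """Hidden state of the last prompt token at the chosen layer."""
    inputs = tok(prompt, return_tensors="pt")
    with torch.no_grad():
        out = model(**inputs)
    return out.hidden_states[layer][0, -1]

# Hypothetical labels: 1 = model answers this correctly, 0 = it hallucinates.
prompts = [
    "Q: What is the capital of France? A:",
    "Q: What is the capital of Zubrowkia? A:",
]
labels = [1, 0]

X = torch.stack([hidden_state(p) for p in prompts]).numpy()
probe = LogisticRegression(max_iter=1000).fit(X, labels)
print(probe.predict_proba(X))
```

If a probe like this generalizes to held-out questions, the "I don't know this" signal is present in the activations even when the sampled text confidently asserts an answer, which is the gap between internals and output the comment is pointing at.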