bluefirebrand 2 hours ago
> It should be trained to answer when it knows the answer, and to state that it does not know the answer when it does not

Do LLMs even have any kind of internal model of what they know or don't know? My understanding is that they don't.