thaumasiotes 5 days ago
> And if "yes", then I see no reason why an LLM can't have that knowledge crammed inside it too. An LLM, by definition, doesn't have such a concept. It's a model of language, hence "LLM". Do you think the phrase just means "software"? Why? | ||||||||||||||||||||||||||||||||||||||||||||||||||
ACCount37 5 days ago | parent
If I had a penny for every confidently incorrect "LLMs can't do X", I'd be able to buy an H100 with them.

Here's a simple test: make up a brand new word, or a brand new person. Then ask a few LLMs what the word means, or when that person was born.

If an LLM had zero operational awareness of its own knowledge, it would be unable to recognize that the word or person is unknown to it. It would always generate a plausible-sounding explanation for what the word might mean, exactly the way it does for the word "carrot", or a plausible-sounding birth date, the way it does for the person "Abraham Lincoln".

In practice, most production-grade LLMs will recognize that a word or a person is unknown to them. That's a very limited, basic version of the desirable "awareness of its own knowledge" - and one that's already present in current LLMs! Clearly, there's room for improved self-awareness.
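If you want to run the test yourself, here's a rough sketch (not part of the original comment): it assumes the official openai Python client with an API key in the environment, and the model name and the invented word "flombrickle" are arbitrary placeholders.

    from openai import OpenAI

    client = OpenAI()  # reads OPENAI_API_KEY from the environment

    # A made-up word the model should never have seen in training data.
    prompt = 'What does the word "flombrickle" mean?'

    response = client.chat.completions.create(
        model="gpt-4o-mini",
        messages=[{"role": "user", "content": prompt}],
    )
    print(response.choices[0].message.content)

If the reply hedges ("I'm not familiar with that word...") instead of confidently inventing a definition, you're seeing the limited self-knowledge described above. Repeat with a real word like "carrot" to compare.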